00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 601 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3263 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.036 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.036 The recommended git tool is: git 00:00:00.037 using credential 00000000-0000-0000-0000-000000000002 00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.060 Fetching changes from the remote Git repository 00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.103 Using shallow fetch with depth 1 00:00:00.103 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.103 > git --version # timeout=10 00:00:00.151 > git --version # 'git version 2.39.2' 00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.272 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.284 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.295 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.295 > git config core.sparsecheckout # timeout=10 00:00:03.307 > git read-tree -mu HEAD # timeout=10 00:00:03.324 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.342 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.342 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.425 [Pipeline] Start of Pipeline 00:00:03.440 [Pipeline] library 00:00:03.442 Loading library shm_lib@master 00:00:03.442 Library shm_lib@master is cached. Copying from home. 00:00:03.460 [Pipeline] node 00:00:03.469 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.471 [Pipeline] { 00:00:03.485 [Pipeline] catchError 00:00:03.487 [Pipeline] { 00:00:03.503 [Pipeline] wrap 00:00:03.514 [Pipeline] { 00:00:03.523 [Pipeline] stage 00:00:03.525 [Pipeline] { (Prologue) 00:00:03.734 [Pipeline] sh 00:00:04.016 + logger -p user.info -t JENKINS-CI 00:00:04.031 [Pipeline] echo 00:00:04.032 Node: WFP8 00:00:04.037 [Pipeline] sh 00:00:04.329 [Pipeline] setCustomBuildProperty 00:00:04.341 [Pipeline] echo 00:00:04.342 Cleanup processes 00:00:04.348 [Pipeline] sh 00:00:04.630 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.630 1073419 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.642 [Pipeline] sh 00:00:04.922 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.922 ++ grep -v 'sudo pgrep' 00:00:04.922 ++ awk '{print $1}' 00:00:04.922 + sudo kill -9 00:00:04.922 + true 00:00:04.933 [Pipeline] cleanWs 00:00:04.941 [WS-CLEANUP] Deleting project workspace... 00:00:04.941 [WS-CLEANUP] Deferred wipeout is used... 
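For reference, the "Cleanup processes" step traced above reduces to a small shell pattern: list any SPDK processes left over in this workspace, drop the pgrep invocation itself from the listing, and kill whatever remains while tolerating an empty result. A minimal sketch under those assumptions (illustrative, not the verbatim pipeline script; the workspace path is the one used by this job):

    # Kill stale SPDK processes from a previous run in this workspace.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # The pid list may be empty (kill then errors out); '|| true' mirrors the
    # pipeline's '+ true' above so this stage can never fail the build.
    sudo kill -9 $pids || true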
00:00:04.947 [WS-CLEANUP] done 00:00:04.950 [Pipeline] setCustomBuildProperty 00:00:04.961 [Pipeline] sh 00:00:05.238 + sudo git config --global --replace-all safe.directory '*' 00:00:05.299 [Pipeline] httpRequest 00:00:05.329 [Pipeline] echo 00:00:05.330 Sorcerer 10.211.164.101 is alive 00:00:05.337 [Pipeline] httpRequest 00:00:05.341 HttpMethod: GET 00:00:05.341 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.342 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.357 Response Code: HTTP/1.1 200 OK 00:00:05.358 Success: Status code 200 is in the accepted range: 200,404 00:00:05.358 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.646 [Pipeline] sh 00:00:06.927 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.939 [Pipeline] httpRequest 00:00:06.974 [Pipeline] echo 00:00:06.975 Sorcerer 10.211.164.101 is alive 00:00:06.984 [Pipeline] httpRequest 00:00:06.988 HttpMethod: GET 00:00:06.989 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:06.989 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.005 Response Code: HTTP/1.1 200 OK 00:00:07.006 Success: Status code 200 is in the accepted range: 200,404 00:00:07.006 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:59.847 [Pipeline] sh 00:01:00.136 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:02.683 [Pipeline] sh 00:01:02.968 + git -C spdk log --oneline -n5 00:01:02.968 719d03c6a sock/uring: only register net impl if supported 00:01:02.968 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:02.968 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:02.968 6c7c1f57e accel: add sequence outstanding stat 00:01:02.968 3bc8e6a26 accel: add utility to put task 00:01:02.987 [Pipeline] withCredentials 00:01:02.998 > git --version # timeout=10 00:01:03.010 > git --version # 'git version 2.39.2' 00:01:03.035 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:03.037 [Pipeline] { 00:01:03.046 [Pipeline] retry 00:01:03.049 [Pipeline] { 00:01:03.066 [Pipeline] sh 00:01:03.635 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:03.908 [Pipeline] } 00:01:03.932 [Pipeline] // retry 00:01:03.938 [Pipeline] } 00:01:03.957 [Pipeline] // withCredentials 00:01:03.967 [Pipeline] httpRequest 00:01:03.987 [Pipeline] echo 00:01:03.988 Sorcerer 10.211.164.101 is alive 00:01:03.998 [Pipeline] httpRequest 00:01:04.003 HttpMethod: GET 00:01:04.004 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:04.004 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:04.011 Response Code: HTTP/1.1 200 OK 00:01:04.012 Success: Status code 200 is in the accepted range: 200,404 00:01:04.012 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:16.306 [Pipeline] sh 00:01:16.592 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:17.985 [Pipeline] sh 00:01:18.274 + git -C dpdk log --oneline -n5 00:01:18.274 eeb0605f11 version: 23.11.0 00:01:18.274 238778122a doc: 
update release notes for 23.11 00:01:18.274 46aa6b3cfc doc: fix description of RSS features 00:01:18.274 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:18.274 7e421ae345 devtools: support skipping forbid rule check 00:01:18.285 [Pipeline] } 00:01:18.302 [Pipeline] // stage 00:01:18.310 [Pipeline] stage 00:01:18.312 [Pipeline] { (Prepare) 00:01:18.335 [Pipeline] writeFile 00:01:18.355 [Pipeline] sh 00:01:18.638 + logger -p user.info -t JENKINS-CI 00:01:18.651 [Pipeline] sh 00:01:18.935 + logger -p user.info -t JENKINS-CI 00:01:18.949 [Pipeline] sh 00:01:19.233 + cat autorun-spdk.conf 00:01:19.233 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.233 SPDK_TEST_NVMF=1 00:01:19.233 SPDK_TEST_NVME_CLI=1 00:01:19.233 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.233 SPDK_TEST_NVMF_NICS=e810 00:01:19.233 SPDK_TEST_VFIOUSER=1 00:01:19.233 SPDK_RUN_UBSAN=1 00:01:19.233 NET_TYPE=phy 00:01:19.233 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:19.233 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.240 RUN_NIGHTLY=1 00:01:19.245 [Pipeline] readFile 00:01:19.273 [Pipeline] withEnv 00:01:19.275 [Pipeline] { 00:01:19.290 [Pipeline] sh 00:01:19.576 + set -ex 00:01:19.576 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:19.576 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.576 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.576 ++ SPDK_TEST_NVMF=1 00:01:19.576 ++ SPDK_TEST_NVME_CLI=1 00:01:19.576 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.576 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.576 ++ SPDK_TEST_VFIOUSER=1 00:01:19.576 ++ SPDK_RUN_UBSAN=1 00:01:19.576 ++ NET_TYPE=phy 00:01:19.576 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:19.576 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.576 ++ RUN_NIGHTLY=1 00:01:19.576 + case $SPDK_TEST_NVMF_NICS in 00:01:19.576 + DRIVERS=ice 00:01:19.576 + [[ tcp == \r\d\m\a ]] 00:01:19.576 + [[ -n ice ]] 00:01:19.576 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.576 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.576 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:19.576 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.576 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.576 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.576 + true 00:01:19.576 + for D in $DRIVERS 00:01:19.576 + sudo modprobe ice 00:01:19.576 + exit 0 00:01:19.586 [Pipeline] } 00:01:19.605 [Pipeline] // withEnv 00:01:19.611 [Pipeline] } 00:01:19.629 [Pipeline] // stage 00:01:19.642 [Pipeline] catchError 00:01:19.644 [Pipeline] { 00:01:19.661 [Pipeline] timeout 00:01:19.661 Timeout set to expire in 50 min 00:01:19.663 [Pipeline] { 00:01:19.679 [Pipeline] stage 00:01:19.681 [Pipeline] { (Tests) 00:01:19.702 [Pipeline] sh 00:01:19.989 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.989 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.989 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.989 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:19.989 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.989 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:19.989 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:19.989 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:19.989 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:19.989 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:19.989 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:19.989 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:19.989 + source /etc/os-release 00:01:19.989 ++ NAME='Fedora Linux' 00:01:19.989 ++ VERSION='38 (Cloud Edition)' 00:01:19.989 ++ ID=fedora 00:01:19.989 ++ VERSION_ID=38 00:01:19.989 ++ VERSION_CODENAME= 00:01:19.989 ++ PLATFORM_ID=platform:f38 00:01:19.989 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:19.989 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.989 ++ LOGO=fedora-logo-icon 00:01:19.989 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:19.989 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.989 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:19.989 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.989 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.989 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.989 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:19.989 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.989 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:19.989 ++ SUPPORT_END=2024-05-14 00:01:19.989 ++ VARIANT='Cloud Edition' 00:01:19.989 ++ VARIANT_ID=cloud 00:01:19.989 + uname -a 00:01:19.989 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:19.989 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:22.530 Hugepages 00:01:22.530 node hugesize free / total 00:01:22.530 node0 1048576kB 0 / 0 00:01:22.530 node0 2048kB 0 / 0 00:01:22.530 node1 1048576kB 0 / 0 00:01:22.530 node1 2048kB 0 / 0 00:01:22.530 00:01:22.530 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.530 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:22.530 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:22.530 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:22.530 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:22.530 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:22.530 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:22.530 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:22.530 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:22.530 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:22.530 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:22.530 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:22.530 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:22.530 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:22.530 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:22.530 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:22.530 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:22.530 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:22.530 + rm -f /tmp/spdk-ld-path 00:01:22.530 + source autorun-spdk.conf 00:01:22.530 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.530 ++ SPDK_TEST_NVMF=1 00:01:22.530 ++ SPDK_TEST_NVME_CLI=1 00:01:22.530 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.530 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.530 ++ SPDK_TEST_VFIOUSER=1 00:01:22.530 ++ SPDK_RUN_UBSAN=1 00:01:22.530 ++ NET_TYPE=phy 00:01:22.530 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:22.530 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.530 ++ RUN_NIGHTLY=1 00:01:22.530 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.530 + [[ -n '' ]] 00:01:22.530 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.530 + for M in /var/spdk/build-*-manifest.txt 00:01:22.530 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.530 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.530 + for M in /var/spdk/build-*-manifest.txt 00:01:22.530 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.530 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.530 ++ uname 00:01:22.530 + [[ Linux == \L\i\n\u\x ]] 00:01:22.530 + sudo dmesg -T 00:01:22.530 + sudo dmesg --clear 00:01:22.530 + dmesg_pid=1074375 00:01:22.530 + [[ Fedora Linux == FreeBSD ]] 00:01:22.530 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.530 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.530 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.530 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.530 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.530 + sudo dmesg -Tw 00:01:22.530 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.530 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.530 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.530 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.530 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.530 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.530 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.530 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.530 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.530 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.530 Test configuration: 00:01:22.530 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.530 SPDK_TEST_NVMF=1 00:01:22.530 SPDK_TEST_NVME_CLI=1 00:01:22.530 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.530 SPDK_TEST_NVMF_NICS=e810 00:01:22.530 SPDK_TEST_VFIOUSER=1 00:01:22.530 SPDK_RUN_UBSAN=1 00:01:22.530 NET_TYPE=phy 00:01:22.530 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:22.530 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.789 RUN_NIGHTLY=1 00:26:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:22.789 00:26:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.789 00:26:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.789 00:26:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.789 00:26:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.789 00:26:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.789 00:26:34 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.789 00:26:34 -- paths/export.sh@5 -- $ export PATH 00:01:22.789 00:26:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.789 00:26:34 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:22.789 00:26:34 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:22.789 00:26:34 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720823194.XXXXXX 00:01:22.789 00:26:34 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720823194.XWu10y 00:01:22.789 00:26:34 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:22.789 00:26:34 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:01:22.789 00:26:34 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.789 00:26:34 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:22.789 00:26:34 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.789 00:26:34 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.789 00:26:34 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:22.789 00:26:34 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:22.789 00:26:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.789 00:26:34 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:22.789 00:26:34 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:22.789 00:26:34 -- pm/common@17 -- $ local monitor 00:01:22.789 00:26:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.789 00:26:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.789 00:26:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.789 00:26:34 -- pm/common@21 -- $ date +%s 00:01:22.789 00:26:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.789 00:26:34 -- pm/common@21 -- $ date +%s 00:01:22.789 00:26:34 -- pm/common@25 -- $ sleep 1 00:01:22.789 00:26:34 -- pm/common@21 -- $ date +%s 00:01:22.789 00:26:34 -- pm/common@21 -- $ date +%s 00:01:22.789 00:26:34 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720823194 00:01:22.789 00:26:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720823194 00:01:22.789 00:26:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720823194 00:01:22.789 00:26:34 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720823194 00:01:22.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720823194_collect-vmstat.pm.log 00:01:22.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720823194_collect-cpu-load.pm.log 00:01:22.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720823194_collect-cpu-temp.pm.log 00:01:22.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720823194_collect-bmc-pm.bmc.pm.log 00:01:23.752 00:26:35 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:23.752 00:26:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.752 00:26:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.752 00:26:35 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.752 00:26:35 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.752 Fri Jul 12 10:26:35 PM UTC 2024 00:01:23.752 00:26:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.752 v24.09-pre-202-g719d03c6a 00:01:23.752 00:26:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.752 00:26:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.752 00:26:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.752 00:26:35 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:23.752 00:26:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:23.752 00:26:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.752 ************************************ 00:01:23.752 START TEST ubsan 00:01:23.752 ************************************ 00:01:23.752 00:26:35 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:23.752 using ubsan 00:01:23.752 00:01:23.752 real 0m0.000s 00:01:23.752 user 0m0.000s 00:01:23.752 sys 0m0.000s 00:01:23.752 00:26:35 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:23.752 00:26:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.752 ************************************ 00:01:23.752 END TEST ubsan 00:01:23.752 ************************************ 00:01:23.752 00:26:35 -- common/autotest_common.sh@1142 -- $ return 0 00:01:23.752 00:26:35 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:23.752 00:26:35 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:23.752 00:26:35 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:23.752 00:26:35 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:23.752 00:26:35 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:23.752 00:26:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.011 ************************************ 00:01:24.011 START TEST build_native_dpdk 00:01:24.011 ************************************ 00:01:24.011 00:26:35 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:24.011 eeb0605f11 version: 23.11.0 00:01:24.011 238778122a doc: update release notes for 23.11 00:01:24.011 46aa6b3cfc doc: fix description of RSS features 00:01:24.011 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:24.011 7e421ae345 devtools: support skipping forbid rule check 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:24.011 00:26:35 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:24.012 00:26:35 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:24.012 00:26:35 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:24.012 patching file config/rte_config.h 00:01:24.012 Hunk #1 succeeded at 60 (offset 1 line). 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:24.012 00:26:35 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:28.206 The Meson build system 00:01:28.206 Version: 1.3.1 00:01:28.206 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:28.206 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:28.206 Build type: native build 00:01:28.206 Program cat found: YES (/usr/bin/cat) 00:01:28.206 Project name: DPDK 00:01:28.206 Project version: 23.11.0 00:01:28.206 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:28.206 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:28.206 Host machine cpu family: x86_64 00:01:28.206 Host machine cpu: x86_64 00:01:28.206 Message: ## Building in Developer Mode ## 00:01:28.206 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:28.206 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:28.206 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:28.206 Program python3 found: YES (/usr/bin/python3) 00:01:28.206 Program cat found: YES (/usr/bin/cat) 00:01:28.206 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
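The lt 23.11.0 21.11.0 trace above steps through the cmp_versions helper in scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays, the components are compared numerically left to right, and the first inequality decides the result. Here 23 > 21 at the first component, so "23.11.0 < 21.11.0" is false and the helper returns 1. A condensed sketch of that logic, assuming purely numeric components (the real helper also sanitizes each field through the decimal wrapper seen in the trace, omitted here):

    # Usage: cmp_versions VER1 OP VER2, e.g. cmp_versions 23.11.0 '<' 21.11.0
    cmp_versions() {
        local op=$2 IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # First differing component decides; missing components count as 0.
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == *'>'* ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]] # equal versions satisfy ==, <= and >=
    }
    cmp_versions 23.11.0 '<' 21.11.0 || echo 'not older' # prints: not older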
00:01:28.206 Compiler for C supports arguments -march=native: YES 00:01:28.206 Checking for size of "void *" : 8 00:01:28.206 Checking for size of "void *" : 8 (cached) 00:01:28.206 Library m found: YES 00:01:28.206 Library numa found: YES 00:01:28.206 Has header "numaif.h" : YES 00:01:28.206 Library fdt found: NO 00:01:28.206 Library execinfo found: NO 00:01:28.206 Has header "execinfo.h" : YES 00:01:28.206 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:28.206 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:28.206 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:28.206 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:28.206 Run-time dependency openssl found: YES 3.0.9 00:01:28.206 Run-time dependency libpcap found: YES 1.10.4 00:01:28.206 Has header "pcap.h" with dependency libpcap: YES 00:01:28.206 Compiler for C supports arguments -Wcast-qual: YES 00:01:28.206 Compiler for C supports arguments -Wdeprecated: YES 00:01:28.206 Compiler for C supports arguments -Wformat: YES 00:01:28.206 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:28.206 Compiler for C supports arguments -Wformat-security: NO 00:01:28.206 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.206 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:28.206 Compiler for C supports arguments -Wnested-externs: YES 00:01:28.206 Compiler for C supports arguments -Wold-style-definition: YES 00:01:28.206 Compiler for C supports arguments -Wpointer-arith: YES 00:01:28.206 Compiler for C supports arguments -Wsign-compare: YES 00:01:28.206 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:28.206 Compiler for C supports arguments -Wundef: YES 00:01:28.206 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.206 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:28.206 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:28.206 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.206 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:28.206 Program objdump found: YES (/usr/bin/objdump) 00:01:28.206 Compiler for C supports arguments -mavx512f: YES 00:01:28.206 Checking if "AVX512 checking" compiles: YES 00:01:28.206 Fetching value of define "__SSE4_2__" : 1 00:01:28.206 Fetching value of define "__AES__" : 1 00:01:28.206 Fetching value of define "__AVX__" : 1 00:01:28.206 Fetching value of define "__AVX2__" : 1 00:01:28.206 Fetching value of define "__AVX512BW__" : 1 00:01:28.206 Fetching value of define "__AVX512CD__" : 1 00:01:28.206 Fetching value of define "__AVX512DQ__" : 1 00:01:28.206 Fetching value of define "__AVX512F__" : 1 00:01:28.206 Fetching value of define "__AVX512VL__" : 1 00:01:28.206 Fetching value of define "__PCLMUL__" : 1 00:01:28.206 Fetching value of define "__RDRND__" : 1 00:01:28.206 Fetching value of define "__RDSEED__" : 1 00:01:28.206 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:28.206 Fetching value of define "__znver1__" : (undefined) 00:01:28.206 Fetching value of define "__znver2__" : (undefined) 00:01:28.206 Fetching value of define "__znver3__" : (undefined) 00:01:28.206 Fetching value of define "__znver4__" : (undefined) 00:01:28.206 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:28.206 Message: lib/log: Defining dependency "log" 00:01:28.206 Message: lib/kvargs: Defining dependency "kvargs" 00:01:28.206 Message: lib/telemetry: Defining dependency 
"telemetry" 00:01:28.206 Checking for function "getentropy" : NO 00:01:28.206 Message: lib/eal: Defining dependency "eal" 00:01:28.206 Message: lib/ring: Defining dependency "ring" 00:01:28.206 Message: lib/rcu: Defining dependency "rcu" 00:01:28.206 Message: lib/mempool: Defining dependency "mempool" 00:01:28.206 Message: lib/mbuf: Defining dependency "mbuf" 00:01:28.206 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:28.206 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:28.206 Compiler for C supports arguments -mpclmul: YES 00:01:28.206 Compiler for C supports arguments -maes: YES 00:01:28.206 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:28.206 Compiler for C supports arguments -mavx512bw: YES 00:01:28.206 Compiler for C supports arguments -mavx512dq: YES 00:01:28.206 Compiler for C supports arguments -mavx512vl: YES 00:01:28.206 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:28.206 Compiler for C supports arguments -mavx2: YES 00:01:28.206 Compiler for C supports arguments -mavx: YES 00:01:28.206 Message: lib/net: Defining dependency "net" 00:01:28.206 Message: lib/meter: Defining dependency "meter" 00:01:28.206 Message: lib/ethdev: Defining dependency "ethdev" 00:01:28.206 Message: lib/pci: Defining dependency "pci" 00:01:28.206 Message: lib/cmdline: Defining dependency "cmdline" 00:01:28.206 Message: lib/metrics: Defining dependency "metrics" 00:01:28.206 Message: lib/hash: Defining dependency "hash" 00:01:28.206 Message: lib/timer: Defining dependency "timer" 00:01:28.206 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.206 Message: lib/acl: Defining dependency "acl" 00:01:28.206 Message: lib/bbdev: Defining dependency "bbdev" 00:01:28.206 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:28.206 Run-time dependency libelf found: YES 0.190 00:01:28.206 Message: lib/bpf: Defining dependency "bpf" 00:01:28.206 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:28.206 Message: lib/compressdev: Defining dependency "compressdev" 00:01:28.206 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:28.206 Message: lib/distributor: Defining dependency "distributor" 00:01:28.206 Message: lib/dmadev: Defining dependency "dmadev" 00:01:28.206 Message: lib/efd: Defining dependency "efd" 00:01:28.206 Message: lib/eventdev: Defining dependency "eventdev" 00:01:28.206 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:28.206 Message: lib/gpudev: Defining dependency "gpudev" 00:01:28.206 Message: lib/gro: Defining dependency "gro" 00:01:28.206 Message: lib/gso: Defining dependency "gso" 00:01:28.206 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:28.206 Message: lib/jobstats: Defining dependency "jobstats" 00:01:28.206 Message: lib/latencystats: Defining dependency "latencystats" 00:01:28.206 Message: lib/lpm: Defining dependency "lpm" 00:01:28.206 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:01:28.206 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:28.206 Message: lib/member: Defining dependency "member" 00:01:28.206 Message: lib/pcapng: Defining dependency "pcapng" 00:01:28.206 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:28.206 Message: lib/power: Defining dependency "power" 00:01:28.206 Message: lib/rawdev: Defining dependency "rawdev" 00:01:28.206 Message: lib/regexdev: Defining dependency "regexdev" 00:01:28.206 Message: lib/mldev: Defining dependency "mldev" 00:01:28.206 Message: lib/rib: Defining dependency "rib" 00:01:28.206 Message: lib/reorder: Defining dependency "reorder" 00:01:28.206 Message: lib/sched: Defining dependency "sched" 00:01:28.206 Message: lib/security: Defining dependency "security" 00:01:28.206 Message: lib/stack: Defining dependency "stack" 00:01:28.206 Has header "linux/userfaultfd.h" : YES 00:01:28.206 Has header "linux/vduse.h" : YES 00:01:28.206 Message: lib/vhost: Defining dependency "vhost" 00:01:28.206 Message: lib/ipsec: Defining dependency "ipsec" 00:01:28.206 Message: lib/pdcp: Defining dependency "pdcp" 00:01:28.206 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:28.206 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:28.206 Message: lib/fib: Defining dependency "fib" 00:01:28.206 Message: lib/port: Defining dependency "port" 00:01:28.206 Message: lib/pdump: Defining dependency "pdump" 00:01:28.206 Message: lib/table: Defining dependency "table" 00:01:28.206 Message: lib/pipeline: Defining dependency "pipeline" 00:01:28.206 Message: lib/graph: Defining dependency "graph" 00:01:28.206 Message: lib/node: Defining dependency "node" 00:01:28.206 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.587 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.587 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.587 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.587 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:29.587 Compiler for C supports arguments -Wno-unused-value: YES 00:01:29.587 Compiler for C supports arguments -Wno-format: YES 00:01:29.587 Compiler for C supports arguments -Wno-format-security: YES 00:01:29.587 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:29.587 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:29.587 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:29.587 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:29.587 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.587 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:29.587 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.587 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:29.587 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:29.587 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:29.587 Has header "sys/epoll.h" : YES 00:01:29.587 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.587 Configuring doxy-api-html.conf using configuration 00:01:29.587 Configuring doxy-api-man.conf using configuration 00:01:29.587 Program mandb found: YES (/usr/bin/mandb) 00:01:29.587 Program sphinx-build found: NO 00:01:29.587 Configuring rte_build_config.h using configuration 00:01:29.587 Message: 00:01:29.587 ================= 00:01:29.587 Applications Enabled 00:01:29.587 
================= 00:01:29.587 00:01:29.587 apps: 00:01:29.587 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:29.587 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:29.587 test-pmd, test-regex, test-sad, test-security-perf, 00:01:29.587 00:01:29.587 Message: 00:01:29.587 ================= 00:01:29.587 Libraries Enabled 00:01:29.587 ================= 00:01:29.587 00:01:29.587 libs: 00:01:29.587 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:29.587 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:29.587 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:29.587 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:29.587 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:29.587 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:29.587 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:29.587 00:01:29.587 00:01:29.587 Message: 00:01:29.587 =============== 00:01:29.587 Drivers Enabled 00:01:29.587 =============== 00:01:29.587 00:01:29.587 common: 00:01:29.587 00:01:29.587 bus: 00:01:29.587 pci, vdev, 00:01:29.587 mempool: 00:01:29.587 ring, 00:01:29.587 dma: 00:01:29.587 00:01:29.587 net: 00:01:29.587 i40e, 00:01:29.587 raw: 00:01:29.587 00:01:29.587 crypto: 00:01:29.587 00:01:29.587 compress: 00:01:29.587 00:01:29.587 regex: 00:01:29.587 00:01:29.587 ml: 00:01:29.587 00:01:29.587 vdpa: 00:01:29.587 00:01:29.587 event: 00:01:29.587 00:01:29.587 baseband: 00:01:29.587 00:01:29.587 gpu: 00:01:29.587 00:01:29.587 00:01:29.587 Message: 00:01:29.587 ================= 00:01:29.587 Content Skipped 00:01:29.587 ================= 00:01:29.587 00:01:29.587 apps: 00:01:29.587 00:01:29.587 libs: 00:01:29.587 00:01:29.587 drivers: 00:01:29.587 common/cpt: not in enabled drivers build config 00:01:29.587 common/dpaax: not in enabled drivers build config 00:01:29.587 common/iavf: not in enabled drivers build config 00:01:29.587 common/idpf: not in enabled drivers build config 00:01:29.587 common/mvep: not in enabled drivers build config 00:01:29.587 common/octeontx: not in enabled drivers build config 00:01:29.587 bus/auxiliary: not in enabled drivers build config 00:01:29.587 bus/cdx: not in enabled drivers build config 00:01:29.587 bus/dpaa: not in enabled drivers build config 00:01:29.587 bus/fslmc: not in enabled drivers build config 00:01:29.587 bus/ifpga: not in enabled drivers build config 00:01:29.587 bus/platform: not in enabled drivers build config 00:01:29.587 bus/vmbus: not in enabled drivers build config 00:01:29.587 common/cnxk: not in enabled drivers build config 00:01:29.587 common/mlx5: not in enabled drivers build config 00:01:29.587 common/nfp: not in enabled drivers build config 00:01:29.587 common/qat: not in enabled drivers build config 00:01:29.587 common/sfc_efx: not in enabled drivers build config 00:01:29.587 mempool/bucket: not in enabled drivers build config 00:01:29.587 mempool/cnxk: not in enabled drivers build config 00:01:29.587 mempool/dpaa: not in enabled drivers build config 00:01:29.587 mempool/dpaa2: not in enabled drivers build config 00:01:29.587 mempool/octeontx: not in enabled drivers build config 00:01:29.587 mempool/stack: not in enabled drivers build config 00:01:29.587 dma/cnxk: not in enabled drivers build config 00:01:29.587 dma/dpaa: not in enabled drivers build config 00:01:29.587 dma/dpaa2: not in enabled drivers build 
config 00:01:29.587 dma/hisilicon: not in enabled drivers build config 00:01:29.587 dma/idxd: not in enabled drivers build config 00:01:29.587 dma/ioat: not in enabled drivers build config 00:01:29.587 dma/skeleton: not in enabled drivers build config 00:01:29.587 net/af_packet: not in enabled drivers build config 00:01:29.587 net/af_xdp: not in enabled drivers build config 00:01:29.587 net/ark: not in enabled drivers build config 00:01:29.587 net/atlantic: not in enabled drivers build config 00:01:29.587 net/avp: not in enabled drivers build config 00:01:29.587 net/axgbe: not in enabled drivers build config 00:01:29.587 net/bnx2x: not in enabled drivers build config 00:01:29.587 net/bnxt: not in enabled drivers build config 00:01:29.587 net/bonding: not in enabled drivers build config 00:01:29.587 net/cnxk: not in enabled drivers build config 00:01:29.587 net/cpfl: not in enabled drivers build config 00:01:29.587 net/cxgbe: not in enabled drivers build config 00:01:29.587 net/dpaa: not in enabled drivers build config 00:01:29.587 net/dpaa2: not in enabled drivers build config 00:01:29.587 net/e1000: not in enabled drivers build config 00:01:29.587 net/ena: not in enabled drivers build config 00:01:29.587 net/enetc: not in enabled drivers build config 00:01:29.587 net/enetfec: not in enabled drivers build config 00:01:29.587 net/enic: not in enabled drivers build config 00:01:29.587 net/failsafe: not in enabled drivers build config 00:01:29.587 net/fm10k: not in enabled drivers build config 00:01:29.587 net/gve: not in enabled drivers build config 00:01:29.587 net/hinic: not in enabled drivers build config 00:01:29.587 net/hns3: not in enabled drivers build config 00:01:29.587 net/iavf: not in enabled drivers build config 00:01:29.587 net/ice: not in enabled drivers build config 00:01:29.587 net/idpf: not in enabled drivers build config 00:01:29.587 net/igc: not in enabled drivers build config 00:01:29.587 net/ionic: not in enabled drivers build config 00:01:29.587 net/ipn3ke: not in enabled drivers build config 00:01:29.587 net/ixgbe: not in enabled drivers build config 00:01:29.587 net/mana: not in enabled drivers build config 00:01:29.587 net/memif: not in enabled drivers build config 00:01:29.587 net/mlx4: not in enabled drivers build config 00:01:29.587 net/mlx5: not in enabled drivers build config 00:01:29.587 net/mvneta: not in enabled drivers build config 00:01:29.587 net/mvpp2: not in enabled drivers build config 00:01:29.587 net/netvsc: not in enabled drivers build config 00:01:29.587 net/nfb: not in enabled drivers build config 00:01:29.587 net/nfp: not in enabled drivers build config 00:01:29.587 net/ngbe: not in enabled drivers build config 00:01:29.587 net/null: not in enabled drivers build config 00:01:29.587 net/octeontx: not in enabled drivers build config 00:01:29.587 net/octeon_ep: not in enabled drivers build config 00:01:29.587 net/pcap: not in enabled drivers build config 00:01:29.587 net/pfe: not in enabled drivers build config 00:01:29.587 net/qede: not in enabled drivers build config 00:01:29.587 net/ring: not in enabled drivers build config 00:01:29.587 net/sfc: not in enabled drivers build config 00:01:29.587 net/softnic: not in enabled drivers build config 00:01:29.587 net/tap: not in enabled drivers build config 00:01:29.587 net/thunderx: not in enabled drivers build config 00:01:29.587 net/txgbe: not in enabled drivers build config 00:01:29.587 net/vdev_netvsc: not in enabled drivers build config 00:01:29.587 net/vhost: not in enabled drivers build config 
00:01:29.587 net/virtio: not in enabled drivers build config 00:01:29.587 net/vmxnet3: not in enabled drivers build config 00:01:29.587 raw/cnxk_bphy: not in enabled drivers build config 00:01:29.587 raw/cnxk_gpio: not in enabled drivers build config 00:01:29.587 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:29.587 raw/ifpga: not in enabled drivers build config 00:01:29.587 raw/ntb: not in enabled drivers build config 00:01:29.587 raw/skeleton: not in enabled drivers build config 00:01:29.587 crypto/armv8: not in enabled drivers build config 00:01:29.587 crypto/bcmfs: not in enabled drivers build config 00:01:29.587 crypto/caam_jr: not in enabled drivers build config 00:01:29.587 crypto/ccp: not in enabled drivers build config 00:01:29.587 crypto/cnxk: not in enabled drivers build config 00:01:29.587 crypto/dpaa_sec: not in enabled drivers build config 00:01:29.587 crypto/dpaa2_sec: not in enabled drivers build config 00:01:29.587 crypto/ipsec_mb: not in enabled drivers build config 00:01:29.587 crypto/mlx5: not in enabled drivers build config 00:01:29.587 crypto/mvsam: not in enabled drivers build config 00:01:29.587 crypto/nitrox: not in enabled drivers build config 00:01:29.587 crypto/null: not in enabled drivers build config 00:01:29.587 crypto/octeontx: not in enabled drivers build config 00:01:29.587 crypto/openssl: not in enabled drivers build config 00:01:29.587 crypto/scheduler: not in enabled drivers build config 00:01:29.587 crypto/uadk: not in enabled drivers build config 00:01:29.587 crypto/virtio: not in enabled drivers build config 00:01:29.587 compress/isal: not in enabled drivers build config 00:01:29.587 compress/mlx5: not in enabled drivers build config 00:01:29.587 compress/octeontx: not in enabled drivers build config 00:01:29.587 compress/zlib: not in enabled drivers build config 00:01:29.587 regex/mlx5: not in enabled drivers build config 00:01:29.587 regex/cn9k: not in enabled drivers build config 00:01:29.587 ml/cnxk: not in enabled drivers build config 00:01:29.587 vdpa/ifc: not in enabled drivers build config 00:01:29.587 vdpa/mlx5: not in enabled drivers build config 00:01:29.587 vdpa/nfp: not in enabled drivers build config 00:01:29.587 vdpa/sfc: not in enabled drivers build config 00:01:29.587 event/cnxk: not in enabled drivers build config 00:01:29.587 event/dlb2: not in enabled drivers build config 00:01:29.587 event/dpaa: not in enabled drivers build config 00:01:29.587 event/dpaa2: not in enabled drivers build config 00:01:29.587 event/dsw: not in enabled drivers build config 00:01:29.587 event/opdl: not in enabled drivers build config 00:01:29.587 event/skeleton: not in enabled drivers build config 00:01:29.587 event/sw: not in enabled drivers build config 00:01:29.587 event/octeontx: not in enabled drivers build config 00:01:29.587 baseband/acc: not in enabled drivers build config 00:01:29.587 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:29.587 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:29.587 baseband/la12xx: not in enabled drivers build config 00:01:29.587 baseband/null: not in enabled drivers build config 00:01:29.587 baseband/turbo_sw: not in enabled drivers build config 00:01:29.587 gpu/cuda: not in enabled drivers build config 00:01:29.587 00:01:29.587 00:01:29.587 Build targets in project: 217 00:01:29.587 00:01:29.587 DPDK 23.11.0 00:01:29.587 00:01:29.587 User defined options 00:01:29.587 libdir : lib 00:01:29.587 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.587 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:29.587 c_link_args : 00:01:29.587 enable_docs : false 00:01:29.587 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.587 enable_kmods : false 00:01:29.587 machine : native 00:01:29.587 tests : false 00:01:29.587 00:01:29.587 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.587 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:29.587 00:26:41 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:01:29.587 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:29.852 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:29.852 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:29.852 [3/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:29.852 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:29.852 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:29.852 [6/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:29.852 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:29.852 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:29.852 [9/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:29.852 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:29.852 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:29.852 [12/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:29.852 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:29.852 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:29.852 [15/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:30.135 [16/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:30.135 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:30.135 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:30.135 [19/707] Linking static target lib/librte_kvargs.a 00:01:30.135 [20/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:30.135 [21/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:30.135 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:30.135 [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:30.135 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:30.135 [25/707] Linking static target lib/librte_pci.a 00:01:30.135 [26/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:30.135 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:30.135 [28/707] Linking static target lib/librte_log.a 00:01:30.135 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:30.135 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:30.135 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:30.135 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:30.135 [33/707] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:30.135 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:30.135 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:30.454 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:30.455 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.455 [38/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:30.455 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:30.455 [40/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:30.455 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:30.455 [42/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.455 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:30.455 [44/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:30.455 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:30.455 [46/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:30.455 [47/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:30.455 [48/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:30.455 [49/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:30.455 [50/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:30.455 [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:30.455 [52/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:30.455 [53/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:30.455 [54/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:30.455 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:30.455 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:30.455 [57/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:30.455 [58/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:30.455 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:30.455 [60/707] Linking static target lib/librte_ring.a 00:01:30.455 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:30.455 [62/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:30.455 [63/707] Linking static target lib/librte_meter.a 00:01:30.455 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:30.455 [65/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:30.455 [66/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:30.455 [67/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:30.455 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:30.455 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:30.455 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:30.455 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:30.719 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:30.719 [73/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:30.719 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:30.719 [75/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:30.719 [76/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:30.719 [77/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:30.719 [78/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:30.719 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:30.719 [80/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:30.719 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:30.719 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:30.719 [83/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:30.719 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:30.719 [85/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:30.719 [86/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:30.719 [87/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:30.719 [88/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:30.719 [89/707] Linking static target lib/librte_cmdline.a 00:01:30.719 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:30.719 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:30.719 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:30.719 [93/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:30.719 [94/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:30.719 [95/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:30.719 [96/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:30.719 [97/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:30.719 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:30.719 [99/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:30.719 [100/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:30.719 [101/707] Linking static target lib/librte_net.a 00:01:30.719 [102/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:30.719 [103/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:30.719 [104/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:30.719 [105/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:30.719 [106/707] Linking static target lib/librte_metrics.a 00:01:30.719 [107/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:30.719 [108/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.719 [109/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:30.981 [110/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.981 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:30.981 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:30.981 [113/707] Linking target lib/librte_log.so.24.0 
00:01:30.981 [114/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:30.981 [115/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:30.981 [116/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:30.981 [117/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.981 [118/707] Linking static target lib/librte_cfgfile.a 00:01:30.981 [119/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:30.981 [120/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:30.981 [121/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:30.981 [122/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:30.981 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:30.981 [124/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:30.981 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:30.981 [126/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:30.981 [127/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:30.981 [128/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:30.981 [129/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:30.981 [130/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:30.981 [131/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:30.981 [132/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:30.981 [133/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.981 [134/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:30.981 [135/707] Linking static target lib/librte_mempool.a 00:01:31.243 [136/707] Linking target lib/librte_kvargs.so.24.0 00:01:31.244 [137/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:31.244 [138/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:31.244 [139/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:31.244 [140/707] Linking static target lib/librte_bitratestats.a 00:01:31.244 [141/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:31.244 [142/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:31.244 [143/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:31.244 [144/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:31.244 [145/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:31.244 [146/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:31.244 [147/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:31.244 [148/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:31.244 [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:31.244 [150/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:31.244 [151/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:31.244 [152/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:31.244 [153/707] Linking static target lib/librte_timer.a 00:01:31.244 [154/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 
00:01:31.244 [155/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:31.244 [156/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:31.244 [157/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.244 [158/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:31.244 [159/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:31.244 [160/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.506 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:31.506 [162/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:31.506 [163/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:31.506 [164/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:31.506 [165/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:31.506 [166/707] Linking static target lib/librte_compressdev.a 00:01:31.506 [167/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:31.506 [168/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:31.506 [169/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:31.506 [170/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:31.506 [171/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.506 [172/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:31.506 [173/707] Linking static target lib/librte_jobstats.a 00:01:31.506 [174/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:31.506 [175/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:31.506 [176/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:31.506 [177/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:31.506 [178/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:31.506 [179/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:31.506 [180/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:31.506 [181/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:31.506 [182/707] Linking static target lib/librte_dispatcher.a 00:01:31.506 [183/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:31.506 [184/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:31.506 [185/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:31.506 [186/707] Linking static target lib/librte_telemetry.a 00:01:31.506 [187/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:31.506 [188/707] Linking static target lib/librte_bbdev.a 00:01:31.506 [189/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:31.506 [190/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:31.506 [191/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:31.506 [192/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:31.506 [193/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:31.765 [194/707] Linking static target lib/librte_rcu.a 00:01:31.765 [195/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:31.765 [196/707] 
Linking static target lib/librte_eal.a 00:01:31.765 [197/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:31.765 [198/707] Linking static target lib/librte_gro.a 00:01:31.765 [199/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:31.765 [200/707] Linking static target lib/librte_gpudev.a 00:01:31.765 [201/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:31.765 [202/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.765 [203/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:31.765 [204/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:31.765 [205/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:31.765 [206/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:31.765 [207/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:31.765 [208/707] Linking static target lib/librte_latencystats.a 00:01:31.765 [209/707] Linking static target lib/librte_dmadev.a 00:01:31.765 [210/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:31.765 [211/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:31.765 [212/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:31.765 [213/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:31.765 [214/707] Linking static target lib/librte_gso.a 00:01:31.765 [215/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:31.765 [216/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:31.765 [217/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:31.765 [218/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.765 [219/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:31.765 [220/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:31.765 [221/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.765 [222/707] Linking static target lib/librte_distributor.a 00:01:31.765 [223/707] Linking static target lib/librte_mbuf.a 00:01:31.765 [224/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:31.765 [225/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:31.765 [226/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:31.765 [227/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:31.765 [228/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:31.765 [229/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:31.765 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:32.030 [231/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:32.030 [232/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:32.030 [233/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.030 [234/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:32.030 [235/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:32.030 [236/707] Linking static target lib/librte_ip_frag.a 00:01:32.030 [237/707] Linking static target 
lib/librte_stack.a 00:01:32.030 [238/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.030 [239/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:32.030 [240/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.030 [241/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:32.030 [242/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.030 [243/707] Linking static target lib/librte_regexdev.a 00:01:32.030 [244/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.030 [245/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:32.030 [246/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.030 [247/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.030 [248/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.030 [249/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:32.030 [250/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:32.030 [251/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:32.030 [252/707] Linking static target lib/librte_mldev.a 00:01:32.030 [253/707] Linking static target lib/librte_rawdev.a 00:01:32.030 [254/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.030 [255/707] Linking static target lib/librte_power.a 00:01:32.030 [256/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.030 [257/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.030 [258/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:32.293 [259/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:32.293 [260/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:32.293 [261/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.293 [262/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:32.293 [263/707] Linking static target lib/librte_pcapng.a 00:01:32.293 [264/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.293 [265/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:32.293 [266/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.293 [267/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.293 [268/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:32.293 [269/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.293 [270/707] Linking static target lib/librte_bpf.a 00:01:32.293 [271/707] Linking target lib/librte_telemetry.so.24.0 00:01:32.293 [272/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:32.293 [273/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:32.293 [274/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.293 [275/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.293 [276/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.293 
[277/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:32.293 [278/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:32.293 [279/707] Linking static target lib/librte_reorder.a 00:01:32.293 [280/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.293 [281/707] Linking static target lib/librte_security.a 00:01:32.293 [282/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.293 [283/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:32.293 [284/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:32.293 [285/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:32.555 [286/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:32.555 [287/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:32.555 [288/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:32.555 [289/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.555 [290/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:32.555 [291/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:32.555 [292/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:32.555 [293/707] Linking static target lib/librte_lpm.a 00:01:32.555 [294/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:32.555 [295/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:32.555 [296/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.555 [297/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:32.555 [298/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:32.555 [299/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.555 [300/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:32.555 [301/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:32.555 [302/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.555 [303/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:32.555 [304/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:32.555 [305/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:32.555 [306/707] Linking static target lib/librte_rib.a 00:01:32.555 [307/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:32.820 [308/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:32.820 [309/707] Linking static target lib/librte_efd.a 00:01:32.820 [310/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:32.820 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:32.820 [312/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.820 [313/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:32.820 [314/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:32.820 [315/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:32.820 [316/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.820 [317/707] Compiling C 
object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:32.820 [318/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:32.820 [319/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:32.820 [320/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:32.820 [321/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.820 [322/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:32.820 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:32.820 [324/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:32.820 [325/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.082 [326/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:33.082 [327/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:33.082 [328/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:33.082 [329/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:33.082 [330/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.082 [331/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:33.082 [332/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.082 [333/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:33.082 [334/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:33.082 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:33.082 [336/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:33.082 [337/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.082 [338/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:33.082 [339/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:33.082 [340/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:33.082 [341/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.082 [342/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:33.082 [343/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.082 [344/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:33.082 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:33.082 [346/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:33.082 [347/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:33.082 [348/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:33.082 [349/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:33.082 [350/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:33.347 [351/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:33.347 [352/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:33.347 [353/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:33.347 [354/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:33.347 [355/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:33.347 [356/707] Generating lib/rib.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:33.347 [357/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:33.347 [358/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:33.347 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:33.347 [360/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:33.347 [361/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:33.347 [362/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:33.347 [363/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:33.347 [364/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:33.347 [365/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:33.347 [366/707] Linking static target lib/librte_fib.a 00:01:33.608 [367/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:33.608 [368/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:33.608 [369/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:33.608 [370/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:33.608 [371/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:33.608 [372/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.608 [373/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:33.608 [374/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:33.608 [375/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:33.608 [376/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:33.608 [377/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:33.608 [378/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:33.608 [379/707] Linking static target lib/librte_pdump.a 00:01:33.608 [380/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:33.608 [381/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:33.608 [382/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:33.608 [383/707] Linking static target lib/librte_graph.a 00:01:33.608 [384/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:33.608 [385/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:33.867 [386/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:33.867 [387/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:33.867 [388/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:33.867 [389/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:33.867 [390/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:33.867 [391/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:33.867 [392/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:33.867 [393/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:33.867 [394/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:33.867 [395/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:33.867 [396/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:33.867 [397/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:33.867 [398/707] Compiling C 
object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:33.867 [399/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:33.867 [400/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:33.867 [401/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.867 [402/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:33.867 [403/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:33.867 [404/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.867 [405/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:33.867 [406/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:33.867 [407/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:33.867 [408/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:34.134 [409/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:34.134 [410/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:34.134 [411/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:34.134 [412/707] Linking static target lib/librte_sched.a 00:01:34.134 [413/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.134 [414/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:34.134 [415/707] Linking static target lib/librte_cryptodev.a 00:01:34.134 [416/707] Linking static target drivers/librte_bus_vdev.a 00:01:34.134 [417/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.134 [418/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:34.134 [419/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.134 [420/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:34.134 [421/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:34.134 [422/707] Linking static target lib/librte_table.a 00:01:34.134 [423/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:34.134 [424/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:34.134 [425/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:34.134 [426/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:34.134 [427/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:34.134 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:34.134 [429/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:34.134 [430/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:34.134 [431/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:34.134 [432/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:34.134 [433/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:34.134 [434/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:34.397 [435/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:34.397 [436/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:34.397 [437/707] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:01:34.397 [438/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.397 [439/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:34.397 [440/707] Linking static target drivers/librte_bus_pci.a 00:01:34.397 [441/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:34.397 [442/707] Linking static target lib/librte_member.a 00:01:34.397 [443/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:34.397 [444/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:34.397 [445/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.397 [446/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:34.397 [447/707] Linking static target lib/librte_ipsec.a 00:01:34.397 [448/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.397 [449/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:34.397 [450/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:34.397 [451/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:34.659 [452/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:34.659 [453/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:34.659 [454/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:34.659 [455/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:34.659 [456/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:34.659 [457/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:34.659 [458/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:34.659 [459/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.659 [460/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.659 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:34.659 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:34.659 [463/707] Linking static target lib/librte_node.a 00:01:34.660 [464/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:34.660 [465/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:34.660 [466/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:34.660 [467/707] Linking static target lib/acl/libavx2_tmp.a 00:01:34.660 [468/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:34.660 [469/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:34.660 [470/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:34.660 [471/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:34.660 [472/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:34.660 [473/707] Linking static target lib/librte_pdcp.a 00:01:34.660 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:34.660 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 
00:01:34.660 [476/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:34.660 [477/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:34.660 [478/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:34.660 [479/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:34.923 [480/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.923 [481/707] Linking static target lib/librte_hash.a 00:01:34.923 [482/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:34.923 [483/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:34.923 [484/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:34.923 [485/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:34.923 [486/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:34.923 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:34.923 [488/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:34.923 [489/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:34.923 [490/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.923 [491/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.923 [492/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.923 [493/707] Linking static target drivers/librte_mempool_ring.a 00:01:34.923 [494/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:34.923 [495/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:34.923 [496/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:34.923 [497/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:34.923 [498/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:34.923 [499/707] Linking static target lib/librte_port.a 00:01:34.923 [500/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:34.923 [501/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:34.923 [502/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:34.923 [503/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:34.923 [504/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:34.923 [505/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:34.923 [506/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:34.923 [507/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:34.923 [508/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:34.923 [509/707] Linking static target lib/librte_eventdev.a 00:01:34.923 [510/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:34.923 [511/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:34.923 [512/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:34.923 [513/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:34.923 [514/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:35.181 [515/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.181 [516/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.181 [517/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.181 [518/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:35.181 [519/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:35.181 [520/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:35.181 [521/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:35.181 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:35.181 [523/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:35.181 [524/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:35.181 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:35.181 [526/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:35.181 [527/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:35.181 [528/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:35.181 [529/707] Linking static target lib/librte_acl.a 00:01:35.181 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:35.181 [531/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:35.181 [532/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:35.439 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:35.439 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:35.439 [535/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.439 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:35.439 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:35.439 [538/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:35.439 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:35.439 [540/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:35.439 [541/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:35.439 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:35.439 [543/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.439 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:35.439 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:35.439 [546/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:35.439 [547/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:35.439 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:35.439 [549/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:35.439 [550/707] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:35.439 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:35.439 [552/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.439 [553/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:35.439 [554/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.698 [555/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:35.698 [556/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:35.698 [557/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:35.698 [558/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:35.698 [559/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:35.698 [560/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:35.698 [561/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:35.698 [562/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:35.698 [563/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:35.698 [564/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:35.698 [565/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:35.957 [566/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:35.957 [567/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:35.957 [568/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:35.957 [569/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:35.957 [570/707] Linking static target lib/librte_ethdev.a 00:01:35.957 [571/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:36.217 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:36.217 [573/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:36.217 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:36.475 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:36.735 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:36.994 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:36.995 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:37.254 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:37.513 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:37.772 [581/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.772 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:37.772 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:38.030 [584/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:38.030 [585/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:38.030 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:38.030 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:38.289 [588/707] Linking static target drivers/librte_net_i40e.a 00:01:38.289 
[589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.225 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.225 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:39.791 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:41.168 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.168 [594/707] Linking target lib/librte_eal.so.24.0 00:01:41.429 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:41.429 [596/707] Linking target lib/librte_pci.so.24.0 00:01:41.429 [597/707] Linking target lib/librte_timer.so.24.0 00:01:41.429 [598/707] Linking target lib/librte_dmadev.so.24.0 00:01:41.429 [599/707] Linking target lib/librte_jobstats.so.24.0 00:01:41.429 [600/707] Linking target lib/librte_ring.so.24.0 00:01:41.429 [601/707] Linking target lib/librte_meter.so.24.0 00:01:41.429 [602/707] Linking target lib/librte_stack.so.24.0 00:01:41.429 [603/707] Linking target lib/librte_cfgfile.so.24.0 00:01:41.429 [604/707] Linking target lib/librte_rawdev.so.24.0 00:01:41.429 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:01:41.429 [606/707] Linking target lib/librte_acl.so.24.0 00:01:41.429 [607/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:41.429 [608/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:41.429 [609/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:41.429 [610/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:41.430 [611/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:41.430 [612/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:41.430 [613/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:41.430 [614/707] Linking target drivers/librte_bus_pci.so.24.0 00:01:41.689 [615/707] Linking target lib/librte_mempool.so.24.0 00:01:41.689 [616/707] Linking target lib/librte_rcu.so.24.0 00:01:41.689 [617/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:41.689 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:41.689 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:41.689 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:01:41.689 [621/707] Linking target lib/librte_mbuf.so.24.0 00:01:41.689 [622/707] Linking target lib/librte_rib.so.24.0 00:01:41.946 [623/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:41.946 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:41.946 [625/707] Linking target lib/librte_reorder.so.24.0 00:01:41.946 [626/707] Linking target lib/librte_compressdev.so.24.0 00:01:41.946 [627/707] Linking target lib/librte_gpudev.so.24.0 00:01:41.946 [628/707] Linking target lib/librte_distributor.so.24.0 00:01:41.946 [629/707] Linking target lib/librte_regexdev.so.24.0 00:01:41.946 [630/707] Linking target lib/librte_bbdev.so.24.0 00:01:41.946 [631/707] Linking target lib/librte_net.so.24.0 00:01:41.946 [632/707] Linking target lib/librte_mldev.so.24.0 00:01:41.946 [633/707] Linking target 
lib/librte_sched.so.24.0 00:01:41.946 [634/707] Linking target lib/librte_cryptodev.so.24.0 00:01:41.946 [635/707] Linking target lib/librte_fib.so.24.0 00:01:41.946 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:41.946 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:41.946 [638/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:41.946 [639/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:42.205 [640/707] Linking target lib/librte_hash.so.24.0 00:01:42.205 [641/707] Linking target lib/librte_cmdline.so.24.0 00:01:42.205 [642/707] Linking target lib/librte_security.so.24.0 00:01:42.205 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:42.205 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:42.205 [645/707] Linking target lib/librte_efd.so.24.0 00:01:42.205 [646/707] Linking target lib/librte_lpm.so.24.0 00:01:42.205 [647/707] Linking target lib/librte_member.so.24.0 00:01:42.205 [648/707] Linking target lib/librte_pdcp.so.24.0 00:01:42.205 [649/707] Linking target lib/librte_ipsec.so.24.0 00:01:42.463 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:42.463 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:43.398 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.398 [653/707] Linking target lib/librte_ethdev.so.24.0 00:01:43.398 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:43.398 [655/707] Linking target lib/librte_metrics.so.24.0 00:01:43.398 [656/707] Linking target lib/librte_bpf.so.24.0 00:01:43.398 [657/707] Linking target lib/librte_gso.so.24.0 00:01:43.398 [658/707] Linking target lib/librte_pcapng.so.24.0 00:01:43.398 [659/707] Linking target lib/librte_ip_frag.so.24.0 00:01:43.398 [660/707] Linking target lib/librte_gro.so.24.0 00:01:43.398 [661/707] Linking target lib/librte_power.so.24.0 00:01:43.398 [662/707] Linking target lib/librte_eventdev.so.24.0 00:01:43.398 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:01:43.656 [664/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:43.656 [665/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:43.656 [666/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:43.656 [667/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:43.656 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:43.656 [669/707] Linking target lib/librte_bitratestats.so.24.0 00:01:43.656 [670/707] Linking target lib/librte_pdump.so.24.0 00:01:43.656 [671/707] Linking target lib/librte_latencystats.so.24.0 00:01:43.656 [672/707] Linking target lib/librte_graph.so.24.0 00:01:43.656 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:01:43.656 [674/707] Linking target lib/librte_port.so.24.0 00:01:43.912 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:43.912 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:43.912 [677/707] Linking target lib/librte_node.so.24.0 00:01:43.912 [678/707] Linking 
target lib/librte_table.so.24.0 00:01:43.912 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:46.446 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:46.446 [681/707] Linking static target lib/librte_pipeline.a 00:01:46.446 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:46.446 [683/707] Linking static target lib/librte_vhost.a 00:01:46.704 [684/707] Linking target app/dpdk-test-acl 00:01:46.704 [685/707] Linking target app/dpdk-pdump 00:01:46.704 [686/707] Linking target app/dpdk-test-cmdline 00:01:46.704 [687/707] Linking target app/dpdk-test-fib 00:01:46.704 [688/707] Linking target app/dpdk-test-dma-perf 00:01:46.704 [689/707] Linking target app/dpdk-graph 00:01:46.704 [690/707] Linking target app/dpdk-test-sad 00:01:46.704 [691/707] Linking target app/dpdk-test-compress-perf 00:01:46.704 [692/707] Linking target app/dpdk-test-flow-perf 00:01:46.704 [693/707] Linking target app/dpdk-test-security-perf 00:01:46.704 [694/707] Linking target app/dpdk-test-bbdev 00:01:46.704 [695/707] Linking target app/dpdk-test-gpudev 00:01:46.704 [696/707] Linking target app/dpdk-test-regex 00:01:46.704 [697/707] Linking target app/dpdk-dumpcap 00:01:46.704 [698/707] Linking target app/dpdk-proc-info 00:01:46.704 [699/707] Linking target app/dpdk-test-pipeline 00:01:46.704 [700/707] Linking target app/dpdk-test-mldev 00:01:46.704 [701/707] Linking target app/dpdk-test-crypto-perf 00:01:46.704 [702/707] Linking target app/dpdk-test-eventdev 00:01:46.704 [703/707] Linking target app/dpdk-testpmd 00:01:48.082 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.082 [705/707] Linking target lib/librte_vhost.so.24.0 00:01:51.372 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.372 [707/707] Linking target lib/librte_pipeline.so.24.0 00:01:51.372 00:27:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:01:51.372 00:27:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:51.373 00:27:02 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:01:51.373 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:51.373 [0/1] Installing files. 
00:01:51.373 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:51.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:51.375 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.376 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.377 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:51.378 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:51.378 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:51.378 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:51.378 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.378 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.379 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:51.641 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:51.641 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:51.641 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.641 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:51.641 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.643 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 
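The long run of "Installing ... to .../build/include" entries above and below is DPDK's meson install phase staging every public header into the build prefix so SPDK can later compile against it. As a rough, illustrative shell equivalent of a single such entry (meson performs the copy internally; the DPDK_SRC variable is just shorthand introduced here):

    # Stage one public header into the install prefix, creating parent
    # directories as needed and using typical read-only header permissions.
    DPDK_SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    install -D -m 0644 "$DPDK_SRC/lib/eventdev/rte_eventdev.h" \
        "$DPDK_SRC/build/include/rte_eventdev.h"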
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:51.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:51.645 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:51.645 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:51.645 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:51.645 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:51.645 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:51.645 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:51.645 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:51.645 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:51.645 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:51.645 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:51.645 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:51.645 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:51.645 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:51.645 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:51.645 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:51.645 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:51.645 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:51.645 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:51.645 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:51.645 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:51.645 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:51.645 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:51.645 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:51.645 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:51.645 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:51.645 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:51.645 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:51.645 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:51.645 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:51.645 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:51.645 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:51.645 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:51.645 
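The paired "Installing symlink" entries above create the conventional Linux shared-library version chain for each DPDK library: the fully versioned file (librte_*.so.24.0) carries the code, the .so.24 link matches the SONAME the dynamic loader resolves at run time, and the bare .so link is what the link editor resolves from -lrte_* at build time. A minimal sketch of the pattern for one library, assuming the staged lib directory shown in the log (the installer repeats this for every library listed):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
    ln -sfn librte_eal.so.24.0 librte_eal.so.24   # SONAME link, resolved by the runtime loader
    ln -sfn librte_eal.so.24 librte_eal.so        # development link, resolved at link time via -lrte_eal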
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:51.645 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:51.646 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:51.646 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:51.646 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:51.646 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:51.646 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:51.646 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:51.646 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:51.646 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:51.646 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:51.646 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:51.646 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:51.646 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:51.646 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:51.646 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:51.646 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:51.646 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:51.646 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:51.646 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:51.646 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:51.646 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:51.646 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:51.646 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:51.646 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:51.646 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:51.646 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:51.646 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:51.646 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:51.646 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:51.646 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:51.646 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:51.646 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:51.646 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:51.646 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:51.646 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:51.646 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:51.646 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:51.646 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:51.646 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:51.646 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:51.646 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:51.646 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:51.646 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:51.646 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:51.646 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:51.646 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:51.646 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:51.646 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:51.646 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:51.646 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:51.646 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:51.646 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:51.646 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:51.646 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:51.646 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:51.646 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:51.646 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:51.646 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:51.646 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:51.646 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:51.646 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:51.646 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:51.646 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:51.646 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:51.646 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:51.646 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:51.646 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:51.646 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:51.646 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:51.646 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:51.646 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:51.646 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:51.646 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:51.646 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:51.646 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:51.646 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:51.646 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:51.646 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:51.646 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:51.646 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:51.646 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:51.646 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:51.646 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:51.646 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:51.646 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:51.646 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:51.646 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:51.646 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:51.646 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:51.646 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:51.646 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:51.646 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:51.646 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:51.646 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:51.646 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:51.646 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:01:51.646 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:51.646 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:51.646 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:51.646 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:51.647 00:27:03 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:01:51.647 00:27:03 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:51.647 00:01:51.647 real 0m27.717s 00:01:51.647 user 8m26.920s 00:01:51.647 sys 1m56.889s 00:01:51.647 00:27:03 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:51.647 00:27:03 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:51.647 ************************************ 00:01:51.647 END TEST build_native_dpdk 00:01:51.647 ************************************ 00:01:51.647 00:27:03 -- common/autotest_common.sh@1142 -- $ return 0 00:01:51.647 00:27:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:51.647 00:27:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:51.647 00:27:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:51.647 00:27:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:51.647 00:27:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:51.647 00:27:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:51.647 00:27:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:51.647 00:27:03 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:51.905 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:51.905 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:51.905 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:51.905 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:52.474 Using 'verbs' RDMA provider 00:02:05.295 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:17.530 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:17.530 Creating mk/config.mk...done. 00:02:17.530 Creating mk/cc.flags.mk...done. 00:02:17.530 Type 'make' to build. 00:02:17.530 00:27:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:17.530 00:27:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:17.530 00:27:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:17.530 00:27:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.530 ************************************ 00:02:17.530 START TEST make 00:02:17.530 ************************************ 00:02:17.530 00:27:28 make -- common/autotest_common.sh@1123 -- $ make -j96 00:02:17.530 make[1]: Nothing to be done for 'all'.
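The configure step above locates the freshly staged DPDK through the libdpdk.pc and libdpdk-libs.pc files installed earlier into build/lib/pkgconfig. Assuming that prefix, the compile and link flags it consumes can be reproduced with a plain pkg-config query (illustrative; the exact flag set depends on which DPDK components were enabled):

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --cflags libdpdk   # include path under .../dpdk/build/include
    pkg-config --libs libdpdk     # -L.../dpdk/build/lib plus the -lrte_* libraries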
00:02:18.920 The Meson build system 00:02:18.920 Version: 1.3.1 00:02:18.920 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:18.920 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:18.920 Build type: native build 00:02:18.920 Project name: libvfio-user 00:02:18.920 Project version: 0.0.1 00:02:18.920 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:18.920 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:18.920 Host machine cpu family: x86_64 00:02:18.920 Host machine cpu: x86_64 00:02:18.920 Run-time dependency threads found: YES 00:02:18.920 Library dl found: YES 00:02:18.920 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:18.920 Run-time dependency json-c found: YES 0.17 00:02:18.920 Run-time dependency cmocka found: YES 1.1.7 00:02:18.920 Program pytest-3 found: NO 00:02:18.920 Program flake8 found: NO 00:02:18.920 Program misspell-fixer found: NO 00:02:18.920 Program restructuredtext-lint found: NO 00:02:18.920 Program valgrind found: YES (/usr/bin/valgrind) 00:02:18.920 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.920 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.920 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.920 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:18.920 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:18.920 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:18.920 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
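The "Run-time dependency ... found" probes in the meson output above amount to pkg-config lookups on the build host, so the versions meson reports can be spot-checked the same way (hypothetical commands, not part of the build itself):

    pkg-config --modversion json-c   # expected to print 0.17
    pkg-config --modversion cmocka   # expected to print 1.1.7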
00:02:18.920 Build targets in project: 8 00:02:18.920 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:18.920 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:18.920 00:02:18.920 libvfio-user 0.0.1 00:02:18.920 00:02:18.920 User defined options 00:02:18.920 buildtype : debug 00:02:18.920 default_library: shared 00:02:18.920 libdir : /usr/local/lib 00:02:18.920 00:02:18.920 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.178 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:19.178 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:19.178 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:19.178 [3/37] Compiling C object samples/null.p/null.c.o 00:02:19.178 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:19.178 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:19.178 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:19.178 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:19.178 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:19.178 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:19.178 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:19.179 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:19.179 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:19.179 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:19.179 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:19.179 [15/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:19.179 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:19.179 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:19.179 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:19.179 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:19.179 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:19.437 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:19.437 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:19.437 [23/37] Compiling C object samples/server.p/server.c.o 00:02:19.437 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:19.437 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:19.437 [26/37] Compiling C object samples/client.p/client.c.o 00:02:19.437 [27/37] Linking target samples/client 00:02:19.437 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:19.437 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:19.437 [30/37] Linking target test/unit_tests 00:02:19.437 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:19.696 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:19.696 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:19.696 [34/37] Linking target samples/server 00:02:19.696 [35/37] Linking target samples/lspci 00:02:19.696 [36/37] Linking target samples/null 00:02:19.696 [37/37] Linking target samples/gpio-pci-idio-16 00:02:19.696 INFO: autodetecting backend as ninja 00:02:19.696 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
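Reading the configuration summary back, the meson invocation behind it is approximately the following; this is a sketch reconstructed from the "User defined options" block above, and SPDK's submodule build script may pass additional flags:

    meson setup \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
        --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib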
00:02:19.696 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:19.955 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:19.955 ninja: no work to do. 00:02:28.072 CC lib/ut_mock/mock.o 00:02:28.072 CC lib/log/log.o 00:02:28.072 CC lib/log/log_flags.o 00:02:28.072 CC lib/ut/ut.o 00:02:28.072 CC lib/log/log_deprecated.o 00:02:28.072 LIB libspdk_ut_mock.a 00:02:28.072 LIB libspdk_log.a 00:02:28.072 LIB libspdk_ut.a 00:02:28.072 SO libspdk_log.so.7.0 00:02:28.072 SO libspdk_ut_mock.so.6.0 00:02:28.072 SO libspdk_ut.so.2.0 00:02:28.072 SYMLINK libspdk_ut_mock.so 00:02:28.072 SYMLINK libspdk_log.so 00:02:28.072 SYMLINK libspdk_ut.so 00:02:28.331 CC lib/dma/dma.o 00:02:28.331 CXX lib/trace_parser/trace.o 00:02:28.331 CC lib/util/base64.o 00:02:28.331 CC lib/ioat/ioat.o 00:02:28.331 CC lib/util/bit_array.o 00:02:28.331 CC lib/util/cpuset.o 00:02:28.331 CC lib/util/crc16.o 00:02:28.331 CC lib/util/crc32.o 00:02:28.331 CC lib/util/crc32c.o 00:02:28.331 CC lib/util/crc32_ieee.o 00:02:28.331 CC lib/util/crc64.o 00:02:28.331 CC lib/util/dif.o 00:02:28.331 CC lib/util/fd.o 00:02:28.331 CC lib/util/file.o 00:02:28.331 CC lib/util/hexlify.o 00:02:28.331 CC lib/util/iov.o 00:02:28.331 CC lib/util/math.o 00:02:28.331 CC lib/util/pipe.o 00:02:28.331 CC lib/util/strerror_tls.o 00:02:28.331 CC lib/util/string.o 00:02:28.331 CC lib/util/uuid.o 00:02:28.331 CC lib/util/fd_group.o 00:02:28.331 CC lib/util/xor.o 00:02:28.331 CC lib/util/zipf.o 00:02:28.589 CC lib/vfio_user/host/vfio_user_pci.o 00:02:28.589 CC lib/vfio_user/host/vfio_user.o 00:02:28.589 LIB libspdk_dma.a 00:02:28.589 SO libspdk_dma.so.4.0 00:02:28.589 LIB libspdk_ioat.a 00:02:28.847 SYMLINK libspdk_dma.so 00:02:28.847 SO libspdk_ioat.so.7.0 00:02:28.847 SYMLINK libspdk_ioat.so 00:02:28.847 LIB libspdk_vfio_user.a 00:02:28.847 SO libspdk_vfio_user.so.5.0 00:02:28.847 LIB libspdk_util.a 00:02:28.847 SYMLINK libspdk_vfio_user.so 00:02:28.847 SO libspdk_util.so.9.1 00:02:29.106 SYMLINK libspdk_util.so 00:02:29.106 LIB libspdk_trace_parser.a 00:02:29.106 SO libspdk_trace_parser.so.5.0 00:02:29.365 SYMLINK libspdk_trace_parser.so 00:02:29.365 CC lib/vmd/vmd.o 00:02:29.365 CC lib/vmd/led.o 00:02:29.365 CC lib/json/json_parse.o 00:02:29.365 CC lib/json/json_util.o 00:02:29.365 CC lib/json/json_write.o 00:02:29.365 CC lib/rdma_provider/common.o 00:02:29.365 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:29.365 CC lib/idxd/idxd.o 00:02:29.365 CC lib/rdma_utils/rdma_utils.o 00:02:29.365 CC lib/conf/conf.o 00:02:29.365 CC lib/idxd/idxd_user.o 00:02:29.365 CC lib/idxd/idxd_kernel.o 00:02:29.365 CC lib/env_dpdk/env.o 00:02:29.365 CC lib/env_dpdk/memory.o 00:02:29.365 CC lib/env_dpdk/pci.o 00:02:29.365 CC lib/env_dpdk/init.o 00:02:29.365 CC lib/env_dpdk/threads.o 00:02:29.365 CC lib/env_dpdk/pci_ioat.o 00:02:29.365 CC lib/env_dpdk/pci_virtio.o 00:02:29.365 CC lib/env_dpdk/pci_vmd.o 00:02:29.365 CC lib/env_dpdk/pci_idxd.o 00:02:29.365 CC lib/env_dpdk/pci_event.o 00:02:29.365 CC lib/env_dpdk/sigbus_handler.o 00:02:29.365 CC lib/env_dpdk/pci_dpdk.o 00:02:29.365 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:29.365 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:29.624 LIB libspdk_rdma_provider.a 00:02:29.624 SO libspdk_rdma_provider.so.6.0 00:02:29.624 LIB libspdk_rdma_utils.a 00:02:29.624 LIB libspdk_conf.a 00:02:29.624 SO libspdk_rdma_utils.so.1.0 00:02:29.624 LIB libspdk_json.a 
00:02:29.624 SO libspdk_conf.so.6.0 00:02:29.624 SYMLINK libspdk_rdma_provider.so 00:02:29.624 SO libspdk_json.so.6.0 00:02:29.624 SYMLINK libspdk_rdma_utils.so 00:02:29.624 SYMLINK libspdk_conf.so 00:02:29.882 SYMLINK libspdk_json.so 00:02:29.882 LIB libspdk_idxd.a 00:02:29.882 SO libspdk_idxd.so.12.0 00:02:29.882 LIB libspdk_vmd.a 00:02:29.882 SO libspdk_vmd.so.6.0 00:02:29.882 SYMLINK libspdk_idxd.so 00:02:29.882 SYMLINK libspdk_vmd.so 00:02:30.141 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.141 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.141 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.141 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.400 LIB libspdk_jsonrpc.a 00:02:30.400 SO libspdk_jsonrpc.so.6.0 00:02:30.400 SYMLINK libspdk_jsonrpc.so 00:02:30.400 LIB libspdk_env_dpdk.a 00:02:30.400 SO libspdk_env_dpdk.so.14.1 00:02:30.660 SYMLINK libspdk_env_dpdk.so 00:02:30.660 CC lib/rpc/rpc.o 00:02:30.919 LIB libspdk_rpc.a 00:02:30.919 SO libspdk_rpc.so.6.0 00:02:30.919 SYMLINK libspdk_rpc.so 00:02:31.179 CC lib/trace/trace.o 00:02:31.179 CC lib/trace/trace_flags.o 00:02:31.179 CC lib/trace/trace_rpc.o 00:02:31.179 CC lib/keyring/keyring.o 00:02:31.179 CC lib/keyring/keyring_rpc.o 00:02:31.179 CC lib/notify/notify.o 00:02:31.179 CC lib/notify/notify_rpc.o 00:02:31.438 LIB libspdk_notify.a 00:02:31.438 LIB libspdk_keyring.a 00:02:31.438 SO libspdk_notify.so.6.0 00:02:31.438 LIB libspdk_trace.a 00:02:31.438 SO libspdk_keyring.so.1.0 00:02:31.438 SO libspdk_trace.so.10.0 00:02:31.438 SYMLINK libspdk_notify.so 00:02:31.697 SYMLINK libspdk_keyring.so 00:02:31.697 SYMLINK libspdk_trace.so 00:02:31.956 CC lib/thread/thread.o 00:02:31.956 CC lib/thread/iobuf.o 00:02:31.956 CC lib/sock/sock.o 00:02:31.956 CC lib/sock/sock_rpc.o 00:02:32.215 LIB libspdk_sock.a 00:02:32.215 SO libspdk_sock.so.10.0 00:02:32.215 SYMLINK libspdk_sock.so 00:02:32.782 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.782 CC lib/nvme/nvme_ctrlr.o 00:02:32.782 CC lib/nvme/nvme_fabric.o 00:02:32.782 CC lib/nvme/nvme_ns_cmd.o 00:02:32.782 CC lib/nvme/nvme_ns.o 00:02:32.782 CC lib/nvme/nvme_pcie_common.o 00:02:32.782 CC lib/nvme/nvme_pcie.o 00:02:32.782 CC lib/nvme/nvme_qpair.o 00:02:32.782 CC lib/nvme/nvme.o 00:02:32.782 CC lib/nvme/nvme_quirks.o 00:02:32.782 CC lib/nvme/nvme_transport.o 00:02:32.782 CC lib/nvme/nvme_discovery.o 00:02:32.782 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:32.782 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:32.782 CC lib/nvme/nvme_tcp.o 00:02:32.782 CC lib/nvme/nvme_opal.o 00:02:32.782 CC lib/nvme/nvme_io_msg.o 00:02:32.782 CC lib/nvme/nvme_poll_group.o 00:02:32.782 CC lib/nvme/nvme_zns.o 00:02:32.782 CC lib/nvme/nvme_stubs.o 00:02:32.782 CC lib/nvme/nvme_auth.o 00:02:32.782 CC lib/nvme/nvme_cuse.o 00:02:32.782 CC lib/nvme/nvme_vfio_user.o 00:02:32.782 CC lib/nvme/nvme_rdma.o 00:02:33.041 LIB libspdk_thread.a 00:02:33.041 SO libspdk_thread.so.10.1 00:02:33.041 SYMLINK libspdk_thread.so 00:02:33.300 CC lib/vfu_tgt/tgt_rpc.o 00:02:33.300 CC lib/vfu_tgt/tgt_endpoint.o 00:02:33.300 CC lib/init/json_config.o 00:02:33.300 CC lib/init/rpc.o 00:02:33.300 CC lib/init/subsystem.o 00:02:33.300 CC lib/init/subsystem_rpc.o 00:02:33.300 CC lib/blob/blobstore.o 00:02:33.300 CC lib/accel/accel.o 00:02:33.300 CC lib/blob/request.o 00:02:33.300 CC lib/accel/accel_rpc.o 00:02:33.300 CC lib/blob/zeroes.o 00:02:33.300 CC lib/blob/blob_bs_dev.o 00:02:33.300 CC lib/accel/accel_sw.o 00:02:33.300 CC lib/virtio/virtio_vhost_user.o 00:02:33.300 CC lib/virtio/virtio.o 00:02:33.300 CC lib/virtio/virtio_vfio_user.o 00:02:33.300 CC lib/virtio/virtio_pci.o 
00:02:33.558 LIB libspdk_init.a 00:02:33.558 SO libspdk_init.so.5.0 00:02:33.558 LIB libspdk_vfu_tgt.a 00:02:33.558 LIB libspdk_virtio.a 00:02:33.558 SO libspdk_vfu_tgt.so.3.0 00:02:33.558 SYMLINK libspdk_init.so 00:02:33.558 SO libspdk_virtio.so.7.0 00:02:33.816 SYMLINK libspdk_vfu_tgt.so 00:02:33.816 SYMLINK libspdk_virtio.so 00:02:34.075 CC lib/event/app.o 00:02:34.075 CC lib/event/reactor.o 00:02:34.075 CC lib/event/log_rpc.o 00:02:34.075 CC lib/event/app_rpc.o 00:02:34.075 CC lib/event/scheduler_static.o 00:02:34.075 LIB libspdk_accel.a 00:02:34.075 SO libspdk_accel.so.15.1 00:02:34.075 SYMLINK libspdk_accel.so 00:02:34.075 LIB libspdk_nvme.a 00:02:34.333 LIB libspdk_event.a 00:02:34.333 SO libspdk_nvme.so.13.1 00:02:34.333 SO libspdk_event.so.14.0 00:02:34.333 SYMLINK libspdk_event.so 00:02:34.591 CC lib/bdev/bdev.o 00:02:34.591 CC lib/bdev/bdev_rpc.o 00:02:34.591 CC lib/bdev/bdev_zone.o 00:02:34.591 CC lib/bdev/part.o 00:02:34.591 CC lib/bdev/scsi_nvme.o 00:02:34.591 SYMLINK libspdk_nvme.so 00:02:35.527 LIB libspdk_blob.a 00:02:35.527 SO libspdk_blob.so.11.0 00:02:35.527 SYMLINK libspdk_blob.so 00:02:35.786 CC lib/blobfs/blobfs.o 00:02:35.786 CC lib/blobfs/tree.o 00:02:35.786 CC lib/lvol/lvol.o 00:02:36.355 LIB libspdk_bdev.a 00:02:36.355 SO libspdk_bdev.so.15.1 00:02:36.355 SYMLINK libspdk_bdev.so 00:02:36.355 LIB libspdk_blobfs.a 00:02:36.355 SO libspdk_blobfs.so.10.0 00:02:36.613 LIB libspdk_lvol.a 00:02:36.613 SYMLINK libspdk_blobfs.so 00:02:36.613 SO libspdk_lvol.so.10.0 00:02:36.613 SYMLINK libspdk_lvol.so 00:02:36.613 CC lib/nbd/nbd.o 00:02:36.613 CC lib/nbd/nbd_rpc.o 00:02:36.613 CC lib/scsi/dev.o 00:02:36.613 CC lib/scsi/lun.o 00:02:36.613 CC lib/scsi/port.o 00:02:36.613 CC lib/scsi/scsi.o 00:02:36.613 CC lib/scsi/scsi_bdev.o 00:02:36.613 CC lib/scsi/scsi_pr.o 00:02:36.613 CC lib/nvmf/ctrlr.o 00:02:36.613 CC lib/scsi/scsi_rpc.o 00:02:36.613 CC lib/scsi/task.o 00:02:36.613 CC lib/nvmf/ctrlr_discovery.o 00:02:36.613 CC lib/ublk/ublk.o 00:02:36.613 CC lib/ftl/ftl_core.o 00:02:36.613 CC lib/nvmf/ctrlr_bdev.o 00:02:36.613 CC lib/ftl/ftl_init.o 00:02:36.613 CC lib/ublk/ublk_rpc.o 00:02:36.613 CC lib/nvmf/subsystem.o 00:02:36.613 CC lib/ftl/ftl_layout.o 00:02:36.613 CC lib/nvmf/nvmf.o 00:02:36.613 CC lib/ftl/ftl_debug.o 00:02:36.613 CC lib/nvmf/nvmf_rpc.o 00:02:36.613 CC lib/ftl/ftl_io.o 00:02:36.613 CC lib/nvmf/transport.o 00:02:36.613 CC lib/ftl/ftl_sb.o 00:02:36.613 CC lib/nvmf/tcp.o 00:02:36.613 CC lib/nvmf/stubs.o 00:02:36.613 CC lib/ftl/ftl_l2p.o 00:02:36.613 CC lib/nvmf/mdns_server.o 00:02:36.613 CC lib/ftl/ftl_l2p_flat.o 00:02:36.613 CC lib/nvmf/vfio_user.o 00:02:36.613 CC lib/ftl/ftl_nv_cache.o 00:02:36.613 CC lib/nvmf/rdma.o 00:02:36.613 CC lib/ftl/ftl_band_ops.o 00:02:36.613 CC lib/ftl/ftl_band.o 00:02:36.613 CC lib/nvmf/auth.o 00:02:36.613 CC lib/ftl/ftl_writer.o 00:02:36.613 CC lib/ftl/ftl_rq.o 00:02:36.613 CC lib/ftl/ftl_reloc.o 00:02:36.613 CC lib/ftl/ftl_l2p_cache.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.613 CC lib/ftl/ftl_p2l.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.613 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.613 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.613 CC lib/ftl/utils/ftl_conf.o 00:02:36.613 CC lib/ftl/utils/ftl_md.o 00:02:36.872 CC lib/ftl/utils/ftl_mempool.o 00:02:36.872 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.872 CC lib/ftl/utils/ftl_property.o 00:02:36.872 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.872 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.872 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.872 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:36.872 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:36.872 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:36.872 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:36.872 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:36.872 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:36.872 CC lib/ftl/base/ftl_base_dev.o 00:02:36.872 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:36.872 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:36.872 CC lib/ftl/base/ftl_base_bdev.o 00:02:36.872 CC lib/ftl/ftl_trace.o 00:02:37.438 LIB libspdk_nbd.a 00:02:37.438 LIB libspdk_scsi.a 00:02:37.438 SO libspdk_nbd.so.7.0 00:02:37.438 SO libspdk_scsi.so.9.0 00:02:37.438 SYMLINK libspdk_nbd.so 00:02:37.438 LIB libspdk_ublk.a 00:02:37.438 SYMLINK libspdk_scsi.so 00:02:37.438 SO libspdk_ublk.so.3.0 00:02:37.438 SYMLINK libspdk_ublk.so 00:02:37.697 LIB libspdk_ftl.a 00:02:37.697 CC lib/vhost/vhost_rpc.o 00:02:37.697 CC lib/vhost/vhost.o 00:02:37.697 CC lib/vhost/vhost_scsi.o 00:02:37.697 CC lib/vhost/vhost_blk.o 00:02:37.697 CC lib/vhost/rte_vhost_user.o 00:02:37.697 CC lib/iscsi/conn.o 00:02:37.697 CC lib/iscsi/init_grp.o 00:02:37.697 CC lib/iscsi/iscsi.o 00:02:37.697 CC lib/iscsi/md5.o 00:02:37.697 CC lib/iscsi/param.o 00:02:37.697 CC lib/iscsi/portal_grp.o 00:02:37.697 CC lib/iscsi/tgt_node.o 00:02:37.697 CC lib/iscsi/iscsi_subsystem.o 00:02:37.697 CC lib/iscsi/task.o 00:02:37.697 CC lib/iscsi/iscsi_rpc.o 00:02:37.956 SO libspdk_ftl.so.9.0 00:02:38.214 SYMLINK libspdk_ftl.so 00:02:38.214 LIB libspdk_nvmf.a 00:02:38.472 SO libspdk_nvmf.so.18.1 00:02:38.472 LIB libspdk_vhost.a 00:02:38.472 SYMLINK libspdk_nvmf.so 00:02:38.730 SO libspdk_vhost.so.8.0 00:02:38.730 SYMLINK libspdk_vhost.so 00:02:38.730 LIB libspdk_iscsi.a 00:02:38.730 SO libspdk_iscsi.so.8.0 00:02:38.990 SYMLINK libspdk_iscsi.so 00:02:39.558 CC module/vfu_device/vfu_virtio.o 00:02:39.558 CC module/vfu_device/vfu_virtio_blk.o 00:02:39.558 CC module/vfu_device/vfu_virtio_rpc.o 00:02:39.558 CC module/vfu_device/vfu_virtio_scsi.o 00:02:39.558 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.558 CC module/blob/bdev/blob_bdev.o 00:02:39.558 LIB libspdk_env_dpdk_rpc.a 00:02:39.558 CC module/accel/dsa/accel_dsa.o 00:02:39.558 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.558 CC module/accel/iaa/accel_iaa.o 00:02:39.558 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.558 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.558 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.558 CC module/keyring/file/keyring.o 00:02:39.558 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:39.558 CC module/keyring/file/keyring_rpc.o 00:02:39.558 CC module/accel/error/accel_error.o 00:02:39.558 CC module/accel/ioat/accel_ioat.o 00:02:39.558 CC module/sock/posix/posix.o 00:02:39.558 CC module/accel/error/accel_error_rpc.o 00:02:39.558 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.558 CC module/keyring/linux/keyring.o 00:02:39.558 CC module/keyring/linux/keyring_rpc.o 00:02:39.558 SO libspdk_env_dpdk_rpc.so.6.0 00:02:39.558 SYMLINK libspdk_env_dpdk_rpc.so 00:02:39.818 LIB libspdk_scheduler_gscheduler.a 00:02:39.818 LIB libspdk_scheduler_dpdk_governor.a 00:02:39.819 LIB 
libspdk_keyring_linux.a 00:02:39.819 LIB libspdk_keyring_file.a 00:02:39.819 LIB libspdk_accel_error.a 00:02:39.819 SO libspdk_scheduler_gscheduler.so.4.0 00:02:39.819 LIB libspdk_accel_ioat.a 00:02:39.819 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:39.819 SO libspdk_keyring_linux.so.1.0 00:02:39.819 LIB libspdk_scheduler_dynamic.a 00:02:39.819 LIB libspdk_accel_iaa.a 00:02:39.819 LIB libspdk_accel_dsa.a 00:02:39.819 SO libspdk_keyring_file.so.1.0 00:02:39.819 SO libspdk_accel_error.so.2.0 00:02:39.819 SO libspdk_scheduler_dynamic.so.4.0 00:02:39.819 SO libspdk_accel_ioat.so.6.0 00:02:39.819 LIB libspdk_blob_bdev.a 00:02:39.819 SO libspdk_accel_iaa.so.3.0 00:02:39.819 SYMLINK libspdk_scheduler_gscheduler.so 00:02:39.819 SO libspdk_accel_dsa.so.5.0 00:02:39.819 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:39.819 SYMLINK libspdk_keyring_linux.so 00:02:39.819 SO libspdk_blob_bdev.so.11.0 00:02:39.819 SYMLINK libspdk_keyring_file.so 00:02:39.819 SYMLINK libspdk_accel_error.so 00:02:39.819 SYMLINK libspdk_accel_iaa.so 00:02:39.819 SYMLINK libspdk_accel_ioat.so 00:02:39.819 SYMLINK libspdk_scheduler_dynamic.so 00:02:39.819 SYMLINK libspdk_accel_dsa.so 00:02:39.819 SYMLINK libspdk_blob_bdev.so 00:02:39.819 LIB libspdk_vfu_device.a 00:02:40.125 SO libspdk_vfu_device.so.3.0 00:02:40.125 SYMLINK libspdk_vfu_device.so 00:02:40.125 LIB libspdk_sock_posix.a 00:02:40.408 SO libspdk_sock_posix.so.6.0 00:02:40.408 SYMLINK libspdk_sock_posix.so 00:02:40.408 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.408 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.408 CC module/bdev/gpt/gpt.o 00:02:40.408 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.408 CC module/bdev/gpt/vbdev_gpt.o 00:02:40.408 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.408 CC module/bdev/malloc/bdev_malloc.o 00:02:40.408 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.408 CC module/bdev/error/vbdev_error.o 00:02:40.408 CC module/bdev/nvme/bdev_nvme.o 00:02:40.408 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.408 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.408 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.408 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.408 CC module/bdev/nvme/nvme_rpc.o 00:02:40.408 CC module/bdev/split/vbdev_split.o 00:02:40.408 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.408 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:40.408 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.408 CC module/bdev/null/bdev_null.o 00:02:40.408 CC module/bdev/nvme/vbdev_opal.o 00:02:40.408 CC module/bdev/raid/bdev_raid.o 00:02:40.408 CC module/bdev/null/bdev_null_rpc.o 00:02:40.408 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.408 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.408 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.408 CC module/bdev/aio/bdev_aio.o 00:02:40.408 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.408 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.408 CC module/bdev/delay/vbdev_delay.o 00:02:40.408 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:40.408 CC module/bdev/raid/raid0.o 00:02:40.408 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.408 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.408 CC module/bdev/raid/raid1.o 00:02:40.408 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.408 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.408 CC module/bdev/raid/concat.o 00:02:40.408 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.408 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.408 CC module/bdev/ftl/bdev_ftl.o 00:02:40.408 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.667 LIB libspdk_blobfs_bdev.a 
00:02:40.667 SO libspdk_blobfs_bdev.so.6.0 00:02:40.667 LIB libspdk_bdev_gpt.a 00:02:40.667 LIB libspdk_bdev_split.a 00:02:40.667 LIB libspdk_bdev_error.a 00:02:40.667 SO libspdk_bdev_split.so.6.0 00:02:40.667 SYMLINK libspdk_blobfs_bdev.so 00:02:40.667 SO libspdk_bdev_gpt.so.6.0 00:02:40.667 LIB libspdk_bdev_null.a 00:02:40.667 SO libspdk_bdev_error.so.6.0 00:02:40.667 LIB libspdk_bdev_passthru.a 00:02:40.667 LIB libspdk_bdev_ftl.a 00:02:40.667 SYMLINK libspdk_bdev_split.so 00:02:40.667 SO libspdk_bdev_null.so.6.0 00:02:40.667 SO libspdk_bdev_passthru.so.6.0 00:02:40.667 LIB libspdk_bdev_malloc.a 00:02:40.667 LIB libspdk_bdev_aio.a 00:02:40.667 SYMLINK libspdk_bdev_gpt.so 00:02:40.667 LIB libspdk_bdev_zone_block.a 00:02:40.667 SYMLINK libspdk_bdev_error.so 00:02:40.667 SO libspdk_bdev_ftl.so.6.0 00:02:40.667 SO libspdk_bdev_malloc.so.6.0 00:02:40.667 SO libspdk_bdev_zone_block.so.6.0 00:02:40.667 SO libspdk_bdev_aio.so.6.0 00:02:40.667 LIB libspdk_bdev_delay.a 00:02:40.667 SYMLINK libspdk_bdev_null.so 00:02:40.927 LIB libspdk_bdev_iscsi.a 00:02:40.927 SYMLINK libspdk_bdev_passthru.so 00:02:40.927 SO libspdk_bdev_delay.so.6.0 00:02:40.927 SO libspdk_bdev_iscsi.so.6.0 00:02:40.927 SYMLINK libspdk_bdev_ftl.so 00:02:40.927 SYMLINK libspdk_bdev_malloc.so 00:02:40.927 SYMLINK libspdk_bdev_zone_block.so 00:02:40.927 SYMLINK libspdk_bdev_aio.so 00:02:40.927 LIB libspdk_bdev_virtio.a 00:02:40.927 LIB libspdk_bdev_lvol.a 00:02:40.927 SO libspdk_bdev_virtio.so.6.0 00:02:40.927 SYMLINK libspdk_bdev_delay.so 00:02:40.927 SYMLINK libspdk_bdev_iscsi.so 00:02:40.927 SO libspdk_bdev_lvol.so.6.0 00:02:40.927 SYMLINK libspdk_bdev_virtio.so 00:02:40.927 SYMLINK libspdk_bdev_lvol.so 00:02:41.186 LIB libspdk_bdev_raid.a 00:02:41.186 SO libspdk_bdev_raid.so.6.0 00:02:41.186 SYMLINK libspdk_bdev_raid.so 00:02:42.125 LIB libspdk_bdev_nvme.a 00:02:42.125 SO libspdk_bdev_nvme.so.7.0 00:02:42.125 SYMLINK libspdk_bdev_nvme.so 00:02:42.694 CC module/event/subsystems/iobuf/iobuf.o 00:02:42.694 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:42.694 CC module/event/subsystems/vmd/vmd.o 00:02:42.694 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:42.694 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:42.694 CC module/event/subsystems/scheduler/scheduler.o 00:02:42.694 CC module/event/subsystems/keyring/keyring.o 00:02:42.694 CC module/event/subsystems/sock/sock.o 00:02:42.694 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:42.954 LIB libspdk_event_scheduler.a 00:02:42.954 LIB libspdk_event_iobuf.a 00:02:42.954 LIB libspdk_event_keyring.a 00:02:42.954 LIB libspdk_event_vmd.a 00:02:42.954 LIB libspdk_event_vhost_blk.a 00:02:42.954 LIB libspdk_event_sock.a 00:02:42.954 LIB libspdk_event_vfu_tgt.a 00:02:42.954 SO libspdk_event_scheduler.so.4.0 00:02:42.954 SO libspdk_event_keyring.so.1.0 00:02:42.954 SO libspdk_event_iobuf.so.3.0 00:02:42.954 SO libspdk_event_vfu_tgt.so.3.0 00:02:42.954 SO libspdk_event_vhost_blk.so.3.0 00:02:42.954 SO libspdk_event_vmd.so.6.0 00:02:42.955 SO libspdk_event_sock.so.5.0 00:02:42.955 SYMLINK libspdk_event_iobuf.so 00:02:42.955 SYMLINK libspdk_event_scheduler.so 00:02:42.955 SYMLINK libspdk_event_keyring.so 00:02:42.955 SYMLINK libspdk_event_vfu_tgt.so 00:02:42.955 SYMLINK libspdk_event_vhost_blk.so 00:02:42.955 SYMLINK libspdk_event_vmd.so 00:02:42.955 SYMLINK libspdk_event_sock.so 00:02:43.522 CC module/event/subsystems/accel/accel.o 00:02:43.522 LIB libspdk_event_accel.a 00:02:43.522 SO libspdk_event_accel.so.6.0 00:02:43.522 SYMLINK libspdk_event_accel.so 00:02:43.781 CC 
module/event/subsystems/bdev/bdev.o 00:02:44.041 LIB libspdk_event_bdev.a 00:02:44.041 SO libspdk_event_bdev.so.6.0 00:02:44.041 SYMLINK libspdk_event_bdev.so 00:02:44.610 CC module/event/subsystems/scsi/scsi.o 00:02:44.610 CC module/event/subsystems/nbd/nbd.o 00:02:44.610 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:44.610 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:44.610 CC module/event/subsystems/ublk/ublk.o 00:02:44.610 LIB libspdk_event_nbd.a 00:02:44.610 LIB libspdk_event_scsi.a 00:02:44.610 LIB libspdk_event_ublk.a 00:02:44.610 SO libspdk_event_nbd.so.6.0 00:02:44.610 SO libspdk_event_scsi.so.6.0 00:02:44.610 SO libspdk_event_ublk.so.3.0 00:02:44.610 LIB libspdk_event_nvmf.a 00:02:44.610 SYMLINK libspdk_event_nbd.so 00:02:44.610 SYMLINK libspdk_event_ublk.so 00:02:44.610 SYMLINK libspdk_event_scsi.so 00:02:44.610 SO libspdk_event_nvmf.so.6.0 00:02:44.870 SYMLINK libspdk_event_nvmf.so 00:02:45.129 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.129 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:45.129 LIB libspdk_event_vhost_scsi.a 00:02:45.129 LIB libspdk_event_iscsi.a 00:02:45.129 SO libspdk_event_vhost_scsi.so.3.0 00:02:45.129 SO libspdk_event_iscsi.so.6.0 00:02:45.129 SYMLINK libspdk_event_vhost_scsi.so 00:02:45.388 SYMLINK libspdk_event_iscsi.so 00:02:45.388 SO libspdk.so.6.0 00:02:45.388 SYMLINK libspdk.so 00:02:45.647 CXX app/trace/trace.o 00:02:45.647 CC app/spdk_nvme_perf/perf.o 00:02:45.917 CC app/spdk_nvme_identify/identify.o 00:02:45.917 CC app/trace_record/trace_record.o 00:02:45.917 CC app/spdk_top/spdk_top.o 00:02:45.917 CC app/spdk_lspci/spdk_lspci.o 00:02:45.917 CC test/rpc_client/rpc_client_test.o 00:02:45.917 TEST_HEADER include/spdk/assert.h 00:02:45.917 TEST_HEADER include/spdk/accel.h 00:02:45.917 TEST_HEADER include/spdk/accel_module.h 00:02:45.917 CC app/spdk_nvme_discover/discovery_aer.o 00:02:45.917 TEST_HEADER include/spdk/barrier.h 00:02:45.917 TEST_HEADER include/spdk/base64.h 00:02:45.917 TEST_HEADER include/spdk/bdev_module.h 00:02:45.917 TEST_HEADER include/spdk/bdev.h 00:02:45.917 TEST_HEADER include/spdk/bdev_zone.h 00:02:45.917 TEST_HEADER include/spdk/bit_pool.h 00:02:45.917 TEST_HEADER include/spdk/bit_array.h 00:02:45.917 TEST_HEADER include/spdk/blob_bdev.h 00:02:45.917 TEST_HEADER include/spdk/blobfs.h 00:02:45.918 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:45.918 TEST_HEADER include/spdk/conf.h 00:02:45.918 TEST_HEADER include/spdk/blob.h 00:02:45.918 TEST_HEADER include/spdk/config.h 00:02:45.918 TEST_HEADER include/spdk/cpuset.h 00:02:45.918 TEST_HEADER include/spdk/crc32.h 00:02:45.918 TEST_HEADER include/spdk/crc16.h 00:02:45.918 TEST_HEADER include/spdk/crc64.h 00:02:45.918 TEST_HEADER include/spdk/dma.h 00:02:45.918 TEST_HEADER include/spdk/env_dpdk.h 00:02:45.918 TEST_HEADER include/spdk/dif.h 00:02:45.918 CC app/spdk_dd/spdk_dd.o 00:02:45.918 TEST_HEADER include/spdk/endian.h 00:02:45.918 TEST_HEADER include/spdk/fd_group.h 00:02:45.918 TEST_HEADER include/spdk/event.h 00:02:45.918 TEST_HEADER include/spdk/env.h 00:02:45.918 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:45.918 TEST_HEADER include/spdk/fd.h 00:02:45.918 TEST_HEADER include/spdk/ftl.h 00:02:45.918 TEST_HEADER include/spdk/file.h 00:02:45.918 TEST_HEADER include/spdk/gpt_spec.h 00:02:45.918 TEST_HEADER include/spdk/hexlify.h 00:02:45.918 TEST_HEADER include/spdk/histogram_data.h 00:02:45.918 TEST_HEADER include/spdk/idxd.h 00:02:45.918 TEST_HEADER include/spdk/idxd_spec.h 00:02:45.918 TEST_HEADER include/spdk/ioat.h 00:02:45.918 TEST_HEADER 
include/spdk/init.h 00:02:45.918 CC app/nvmf_tgt/nvmf_main.o 00:02:45.918 TEST_HEADER include/spdk/ioat_spec.h 00:02:45.918 TEST_HEADER include/spdk/iscsi_spec.h 00:02:45.918 TEST_HEADER include/spdk/json.h 00:02:45.918 TEST_HEADER include/spdk/jsonrpc.h 00:02:45.918 CC app/iscsi_tgt/iscsi_tgt.o 00:02:45.918 TEST_HEADER include/spdk/keyring_module.h 00:02:45.918 TEST_HEADER include/spdk/likely.h 00:02:45.918 TEST_HEADER include/spdk/keyring.h 00:02:45.918 TEST_HEADER include/spdk/log.h 00:02:45.918 TEST_HEADER include/spdk/lvol.h 00:02:45.918 TEST_HEADER include/spdk/memory.h 00:02:45.918 TEST_HEADER include/spdk/nbd.h 00:02:45.918 TEST_HEADER include/spdk/mmio.h 00:02:45.918 TEST_HEADER include/spdk/notify.h 00:02:45.918 TEST_HEADER include/spdk/nvme.h 00:02:45.918 TEST_HEADER include/spdk/nvme_intel.h 00:02:45.918 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:45.918 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:45.918 TEST_HEADER include/spdk/nvme_spec.h 00:02:45.918 TEST_HEADER include/spdk/nvme_zns.h 00:02:45.918 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:45.918 TEST_HEADER include/spdk/nvmf.h 00:02:45.918 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:45.918 TEST_HEADER include/spdk/nvmf_spec.h 00:02:45.918 TEST_HEADER include/spdk/opal_spec.h 00:02:45.918 TEST_HEADER include/spdk/nvmf_transport.h 00:02:45.918 TEST_HEADER include/spdk/opal.h 00:02:45.918 TEST_HEADER include/spdk/pci_ids.h 00:02:45.918 TEST_HEADER include/spdk/pipe.h 00:02:45.918 TEST_HEADER include/spdk/reduce.h 00:02:45.918 TEST_HEADER include/spdk/queue.h 00:02:45.918 TEST_HEADER include/spdk/rpc.h 00:02:45.918 TEST_HEADER include/spdk/scheduler.h 00:02:45.918 TEST_HEADER include/spdk/scsi.h 00:02:45.918 TEST_HEADER include/spdk/scsi_spec.h 00:02:45.918 TEST_HEADER include/spdk/stdinc.h 00:02:45.918 TEST_HEADER include/spdk/sock.h 00:02:45.918 TEST_HEADER include/spdk/trace.h 00:02:45.918 CC app/spdk_tgt/spdk_tgt.o 00:02:45.918 TEST_HEADER include/spdk/trace_parser.h 00:02:45.918 TEST_HEADER include/spdk/string.h 00:02:45.918 TEST_HEADER include/spdk/thread.h 00:02:45.918 TEST_HEADER include/spdk/ublk.h 00:02:45.918 TEST_HEADER include/spdk/util.h 00:02:45.918 TEST_HEADER include/spdk/tree.h 00:02:45.918 TEST_HEADER include/spdk/uuid.h 00:02:45.918 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:45.918 TEST_HEADER include/spdk/version.h 00:02:45.918 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:45.918 TEST_HEADER include/spdk/xor.h 00:02:45.918 TEST_HEADER include/spdk/vmd.h 00:02:45.918 TEST_HEADER include/spdk/vhost.h 00:02:45.918 TEST_HEADER include/spdk/zipf.h 00:02:45.918 CXX test/cpp_headers/accel.o 00:02:45.918 CXX test/cpp_headers/accel_module.o 00:02:45.918 CXX test/cpp_headers/assert.o 00:02:45.918 CXX test/cpp_headers/barrier.o 00:02:45.918 CXX test/cpp_headers/base64.o 00:02:45.918 CXX test/cpp_headers/bdev.o 00:02:45.918 CXX test/cpp_headers/bdev_module.o 00:02:45.918 CXX test/cpp_headers/bdev_zone.o 00:02:45.918 CXX test/cpp_headers/bit_pool.o 00:02:45.918 CXX test/cpp_headers/bit_array.o 00:02:45.918 CXX test/cpp_headers/blob_bdev.o 00:02:45.918 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.918 CXX test/cpp_headers/blob.o 00:02:45.918 CXX test/cpp_headers/blobfs.o 00:02:45.918 CXX test/cpp_headers/config.o 00:02:45.918 CXX test/cpp_headers/conf.o 00:02:45.918 CXX test/cpp_headers/cpuset.o 00:02:45.918 CXX test/cpp_headers/crc32.o 00:02:45.918 CXX test/cpp_headers/crc16.o 00:02:45.918 CXX test/cpp_headers/crc64.o 00:02:45.918 CXX test/cpp_headers/dif.o 00:02:45.918 CXX test/cpp_headers/dma.o 00:02:45.918 
CXX test/cpp_headers/endian.o 00:02:45.918 CXX test/cpp_headers/env.o 00:02:45.918 CXX test/cpp_headers/event.o 00:02:45.918 CXX test/cpp_headers/env_dpdk.o 00:02:45.918 CXX test/cpp_headers/fd_group.o 00:02:45.918 CXX test/cpp_headers/fd.o 00:02:45.918 CXX test/cpp_headers/gpt_spec.o 00:02:45.918 CXX test/cpp_headers/file.o 00:02:45.918 CXX test/cpp_headers/ftl.o 00:02:45.918 CXX test/cpp_headers/hexlify.o 00:02:45.918 CXX test/cpp_headers/idxd.o 00:02:45.918 CXX test/cpp_headers/histogram_data.o 00:02:45.918 CXX test/cpp_headers/init.o 00:02:45.918 CXX test/cpp_headers/idxd_spec.o 00:02:45.918 CXX test/cpp_headers/ioat_spec.o 00:02:45.918 CXX test/cpp_headers/iscsi_spec.o 00:02:45.918 CXX test/cpp_headers/ioat.o 00:02:45.918 CXX test/cpp_headers/json.o 00:02:45.918 CXX test/cpp_headers/jsonrpc.o 00:02:45.918 CXX test/cpp_headers/keyring.o 00:02:45.918 CXX test/cpp_headers/keyring_module.o 00:02:45.918 CXX test/cpp_headers/lvol.o 00:02:45.918 CXX test/cpp_headers/likely.o 00:02:45.918 CXX test/cpp_headers/log.o 00:02:45.918 CXX test/cpp_headers/memory.o 00:02:45.918 CXX test/cpp_headers/nbd.o 00:02:45.918 CXX test/cpp_headers/mmio.o 00:02:45.918 CXX test/cpp_headers/nvme.o 00:02:45.918 CXX test/cpp_headers/notify.o 00:02:45.918 CXX test/cpp_headers/nvme_intel.o 00:02:45.918 CXX test/cpp_headers/nvme_ocssd.o 00:02:45.918 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:45.918 CXX test/cpp_headers/nvme_spec.o 00:02:45.918 CC examples/ioat/verify/verify.o 00:02:45.918 CXX test/cpp_headers/nvme_zns.o 00:02:45.918 CXX test/cpp_headers/nvmf_cmd.o 00:02:45.918 CXX test/cpp_headers/nvmf.o 00:02:45.918 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:45.918 CXX test/cpp_headers/nvmf_spec.o 00:02:45.918 CXX test/cpp_headers/opal.o 00:02:45.918 CXX test/cpp_headers/opal_spec.o 00:02:45.918 CXX test/cpp_headers/nvmf_transport.o 00:02:45.918 CC examples/ioat/perf/perf.o 00:02:45.918 CXX test/cpp_headers/pci_ids.o 00:02:45.918 CXX test/cpp_headers/pipe.o 00:02:45.918 CXX test/cpp_headers/queue.o 00:02:45.918 CC examples/util/zipf/zipf.o 00:02:45.918 CXX test/cpp_headers/reduce.o 00:02:45.918 CC test/app/histogram_perf/histogram_perf.o 00:02:45.918 CC test/app/jsoncat/jsoncat.o 00:02:45.918 CC test/app/stub/stub.o 00:02:45.918 CXX test/cpp_headers/rpc.o 00:02:45.918 CC test/env/vtophys/vtophys.o 00:02:45.918 CC test/env/memory/memory_ut.o 00:02:45.918 CC app/fio/nvme/fio_plugin.o 00:02:46.188 CC test/env/pci/pci_ut.o 00:02:46.188 CC test/thread/poller_perf/poller_perf.o 00:02:46.188 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.188 CC app/fio/bdev/fio_plugin.o 00:02:46.188 CC test/dma/test_dma/test_dma.o 00:02:46.188 CXX test/cpp_headers/scheduler.o 00:02:46.188 CC test/app/bdev_svc/bdev_svc.o 00:02:46.188 LINK spdk_lspci 00:02:46.188 LINK spdk_nvme_discover 00:02:46.450 LINK spdk_trace_record 00:02:46.450 LINK rpc_client_test 00:02:46.450 LINK iscsi_tgt 00:02:46.450 LINK interrupt_tgt 00:02:46.450 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:46.450 LINK nvmf_tgt 00:02:46.450 LINK spdk_tgt 00:02:46.450 CXX test/cpp_headers/scsi.o 00:02:46.450 CC test/env/mem_callbacks/mem_callbacks.o 00:02:46.450 CXX test/cpp_headers/scsi_spec.o 00:02:46.450 CXX test/cpp_headers/stdinc.o 00:02:46.450 CXX test/cpp_headers/sock.o 00:02:46.450 CXX test/cpp_headers/string.o 00:02:46.450 CXX test/cpp_headers/thread.o 00:02:46.450 CXX test/cpp_headers/trace.o 00:02:46.450 CXX test/cpp_headers/trace_parser.o 00:02:46.450 CXX test/cpp_headers/tree.o 00:02:46.450 CXX test/cpp_headers/ublk.o 00:02:46.450 LINK 
env_dpdk_post_init 00:02:46.450 CXX test/cpp_headers/util.o 00:02:46.450 CXX test/cpp_headers/uuid.o 00:02:46.450 CXX test/cpp_headers/version.o 00:02:46.450 CXX test/cpp_headers/vfio_user_pci.o 00:02:46.450 CXX test/cpp_headers/vfio_user_spec.o 00:02:46.450 CXX test/cpp_headers/vhost.o 00:02:46.450 CXX test/cpp_headers/vmd.o 00:02:46.450 LINK stub 00:02:46.450 CXX test/cpp_headers/xor.o 00:02:46.450 CXX test/cpp_headers/zipf.o 00:02:46.450 LINK spdk_trace 00:02:46.450 LINK histogram_perf 00:02:46.450 LINK zipf 00:02:46.450 LINK jsoncat 00:02:46.450 LINK vtophys 00:02:46.708 LINK spdk_dd 00:02:46.708 LINK poller_perf 00:02:46.708 LINK verify 00:02:46.708 LINK ioat_perf 00:02:46.708 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:46.708 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:46.708 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:46.708 LINK bdev_svc 00:02:46.708 LINK pci_ut 00:02:46.966 LINK test_dma 00:02:46.966 LINK spdk_nvme 00:02:46.966 LINK spdk_bdev 00:02:46.966 CC app/vhost/vhost.o 00:02:46.966 LINK spdk_nvme_perf 00:02:46.966 LINK nvme_fuzz 00:02:46.966 LINK vhost_fuzz 00:02:46.966 CC test/event/reactor/reactor.o 00:02:46.966 CC test/event/reactor_perf/reactor_perf.o 00:02:46.966 CC test/event/event_perf/event_perf.o 00:02:46.966 CC test/event/app_repeat/app_repeat.o 00:02:46.966 CC examples/vmd/lsvmd/lsvmd.o 00:02:46.966 CC examples/vmd/led/led.o 00:02:46.966 CC test/event/scheduler/scheduler.o 00:02:46.966 CC examples/idxd/perf/perf.o 00:02:47.225 CC examples/sock/hello_world/hello_sock.o 00:02:47.225 LINK spdk_nvme_identify 00:02:47.225 CC examples/thread/thread/thread_ex.o 00:02:47.225 LINK vhost 00:02:47.225 LINK mem_callbacks 00:02:47.225 LINK spdk_top 00:02:47.225 LINK reactor 00:02:47.225 LINK reactor_perf 00:02:47.225 LINK event_perf 00:02:47.225 LINK lsvmd 00:02:47.225 LINK app_repeat 00:02:47.225 LINK led 00:02:47.225 CC test/nvme/reserve/reserve.o 00:02:47.225 LINK scheduler 00:02:47.225 CC test/nvme/startup/startup.o 00:02:47.225 CC test/nvme/aer/aer.o 00:02:47.225 CC test/nvme/connect_stress/connect_stress.o 00:02:47.225 CC test/nvme/overhead/overhead.o 00:02:47.225 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:47.225 CC test/nvme/cuse/cuse.o 00:02:47.225 CC test/nvme/fused_ordering/fused_ordering.o 00:02:47.225 CC test/nvme/err_injection/err_injection.o 00:02:47.225 CC test/nvme/fdp/fdp.o 00:02:47.225 CC test/nvme/reset/reset.o 00:02:47.225 CC test/nvme/simple_copy/simple_copy.o 00:02:47.225 CC test/nvme/compliance/nvme_compliance.o 00:02:47.225 CC test/nvme/boot_partition/boot_partition.o 00:02:47.225 LINK hello_sock 00:02:47.225 CC test/nvme/e2edp/nvme_dp.o 00:02:47.225 CC test/accel/dif/dif.o 00:02:47.225 CC test/nvme/sgl/sgl.o 00:02:47.484 CC test/blobfs/mkfs/mkfs.o 00:02:47.484 LINK memory_ut 00:02:47.484 LINK idxd_perf 00:02:47.484 LINK thread 00:02:47.484 CC test/lvol/esnap/esnap.o 00:02:47.484 LINK boot_partition 00:02:47.484 LINK startup 00:02:47.484 LINK doorbell_aers 00:02:47.484 LINK connect_stress 00:02:47.484 LINK err_injection 00:02:47.484 LINK reserve 00:02:47.484 LINK fused_ordering 00:02:47.484 LINK aer 00:02:47.484 LINK simple_copy 00:02:47.484 LINK mkfs 00:02:47.484 LINK reset 00:02:47.484 LINK nvme_dp 00:02:47.484 LINK sgl 00:02:47.484 LINK overhead 00:02:47.743 LINK fdp 00:02:47.743 LINK nvme_compliance 00:02:47.743 LINK dif 00:02:47.743 CC examples/nvme/hotplug/hotplug.o 00:02:47.743 CC examples/nvme/hello_world/hello_world.o 00:02:47.743 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.743 CC examples/nvme/nvme_manage/nvme_manage.o 
00:02:47.743 CC examples/nvme/abort/abort.o 00:02:47.743 CC examples/nvme/reconnect/reconnect.o 00:02:47.743 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:47.743 CC examples/nvme/arbitration/arbitration.o 00:02:48.002 CC examples/accel/perf/accel_perf.o 00:02:48.002 LINK pmr_persistence 00:02:48.002 LINK cmb_copy 00:02:48.002 LINK hotplug 00:02:48.002 LINK iscsi_fuzz 00:02:48.002 CC examples/blob/hello_world/hello_blob.o 00:02:48.002 CC examples/blob/cli/blobcli.o 00:02:48.002 LINK hello_world 00:02:48.002 LINK reconnect 00:02:48.002 LINK arbitration 00:02:48.002 LINK abort 00:02:48.261 LINK nvme_manage 00:02:48.261 LINK hello_blob 00:02:48.261 CC test/bdev/bdevio/bdevio.o 00:02:48.261 LINK accel_perf 00:02:48.261 LINK cuse 00:02:48.520 LINK blobcli 00:02:48.520 LINK bdevio 00:02:48.779 CC examples/bdev/hello_world/hello_bdev.o 00:02:48.779 CC examples/bdev/bdevperf/bdevperf.o 00:02:49.038 LINK hello_bdev 00:02:49.297 LINK bdevperf 00:02:49.866 CC examples/nvmf/nvmf/nvmf.o 00:02:50.126 LINK nvmf 00:02:51.063 LINK esnap 00:02:51.063 00:02:51.063 real 0m34.145s 00:02:51.063 user 5m9.090s 00:02:51.063 sys 2m27.363s 00:02:51.063 00:28:02 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:51.063 00:28:02 make -- common/autotest_common.sh@10 -- $ set +x 00:02:51.063 ************************************ 00:02:51.063 END TEST make 00:02:51.063 ************************************ 00:02:51.323 00:28:02 -- common/autotest_common.sh@1142 -- $ return 0 00:02:51.323 00:28:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:51.323 00:28:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:51.323 00:28:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:51.323 00:28:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:51.324 00:28:02 -- pm/common@44 -- $ pid=1074412 00:02:51.324 00:28:02 -- pm/common@50 -- $ kill -TERM 1074412 00:02:51.324 00:28:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:51.324 00:28:02 -- pm/common@44 -- $ pid=1074413 00:02:51.324 00:28:02 -- pm/common@50 -- $ kill -TERM 1074413 00:02:51.324 00:28:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:51.324 00:28:02 -- pm/common@44 -- $ pid=1074415 00:02:51.324 00:28:02 -- pm/common@50 -- $ kill -TERM 1074415 00:02:51.324 00:28:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:51.324 00:28:02 -- pm/common@44 -- $ pid=1074439 00:02:51.324 00:28:02 -- pm/common@50 -- $ sudo -E kill -TERM 1074439 00:02:51.324 00:28:02 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:51.324 00:28:02 -- nvmf/common.sh@7 -- # uname -s 00:02:51.324 00:28:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:51.324 00:28:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:51.324 00:28:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:51.324 00:28:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:51.324 00:28:02 
-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:51.324 00:28:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:51.324 00:28:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:51.324 00:28:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:51.324 00:28:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:51.324 00:28:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:51.324 00:28:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:51.324 00:28:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:51.324 00:28:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:51.324 00:28:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:51.324 00:28:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:51.324 00:28:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:51.324 00:28:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:51.324 00:28:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:51.324 00:28:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:51.324 00:28:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:51.324 00:28:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.324 00:28:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.324 00:28:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.324 00:28:02 -- paths/export.sh@5 -- # export PATH 00:02:51.324 00:28:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.324 00:28:02 -- nvmf/common.sh@47 -- # : 0 00:02:51.324 00:28:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:51.324 00:28:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:51.324 00:28:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:51.324 00:28:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:51.324 00:28:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:51.324 00:28:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:51.324 00:28:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:51.324 00:28:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:51.324 00:28:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:51.324 00:28:02 -- spdk/autotest.sh@32 -- # uname -s 00:02:51.324 00:28:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:51.324 00:28:02 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:51.324 00:28:02 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:51.324 00:28:02 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:51.324 00:28:02 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:51.324 00:28:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:51.324 00:28:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:51.324 00:28:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:51.324 00:28:02 -- spdk/autotest.sh@48 -- # udevadm_pid=1148236 00:02:51.324 00:28:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:51.324 00:28:02 -- pm/common@17 -- # local monitor 00:02:51.324 00:28:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:51.324 00:28:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@21 -- # date +%s 00:02:51.324 00:28:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.324 00:28:02 -- pm/common@21 -- # date +%s 00:02:51.324 00:28:02 -- pm/common@25 -- # sleep 1 00:02:51.324 00:28:02 -- pm/common@21 -- # date +%s 00:02:51.324 00:28:02 -- pm/common@21 -- # date +%s 00:02:51.324 00:28:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720823282 00:02:51.324 00:28:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720823282 00:02:51.324 00:28:02 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720823282 00:02:51.324 00:28:02 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720823282 00:02:51.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720823282_collect-vmstat.pm.log 00:02:51.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720823282_collect-cpu-load.pm.log 00:02:51.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720823282_collect-cpu-temp.pm.log 00:02:51.324 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720823282_collect-bmc-pm.bmc.pm.log 00:02:52.261 00:28:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:52.261 00:28:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:52.261 00:28:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:52.261 00:28:03 -- common/autotest_common.sh@10 -- # set +x 00:02:52.261 00:28:03 -- spdk/autotest.sh@59 -- # create_test_list 00:02:52.261 00:28:03 -- common/autotest_common.sh@746 -- # xtrace_disable 
00:02:52.261 00:28:03 -- common/autotest_common.sh@10 -- # set +x 00:02:52.521 00:28:03 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:52.521 00:28:03 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.521 00:28:03 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.521 00:28:03 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:52.521 00:28:03 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:52.521 00:28:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:52.521 00:28:03 -- common/autotest_common.sh@1455 -- # uname 00:02:52.521 00:28:03 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:52.521 00:28:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:52.521 00:28:03 -- common/autotest_common.sh@1475 -- # uname 00:02:52.521 00:28:03 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:52.521 00:28:03 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:52.521 00:28:03 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:52.521 00:28:03 -- spdk/autotest.sh@72 -- # hash lcov 00:02:52.521 00:28:03 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:52.521 00:28:03 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:52.521 --rc lcov_branch_coverage=1 00:02:52.521 --rc lcov_function_coverage=1 00:02:52.521 --rc genhtml_branch_coverage=1 00:02:52.521 --rc genhtml_function_coverage=1 00:02:52.521 --rc genhtml_legend=1 00:02:52.521 --rc geninfo_all_blocks=1 00:02:52.521 ' 00:02:52.521 00:28:03 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:52.521 --rc lcov_branch_coverage=1 00:02:52.521 --rc lcov_function_coverage=1 00:02:52.521 --rc genhtml_branch_coverage=1 00:02:52.521 --rc genhtml_function_coverage=1 00:02:52.521 --rc genhtml_legend=1 00:02:52.521 --rc geninfo_all_blocks=1 00:02:52.521 ' 00:02:52.521 00:28:03 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:52.521 --rc lcov_branch_coverage=1 00:02:52.521 --rc lcov_function_coverage=1 00:02:52.521 --rc genhtml_branch_coverage=1 00:02:52.521 --rc genhtml_function_coverage=1 00:02:52.521 --rc genhtml_legend=1 00:02:52.521 --rc geninfo_all_blocks=1 00:02:52.521 --no-external' 00:02:52.521 00:28:03 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:52.521 --rc lcov_branch_coverage=1 00:02:52.521 --rc lcov_function_coverage=1 00:02:52.521 --rc genhtml_branch_coverage=1 00:02:52.521 --rc genhtml_function_coverage=1 00:02:52.521 --rc genhtml_legend=1 00:02:52.521 --rc geninfo_all_blocks=1 00:02:52.521 --no-external' 00:02:52.521 00:28:03 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:52.521 lcov: LCOV version 1.14 00:02:52.521 00:28:03 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:56.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:56.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:56.716 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 
00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:56.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:56.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:56.716 
00:02:56.716 geninfo: WARNING: GCOV did not produce any data for the following note files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/, each of which first reported "no functions found": keyring_module.gcno, lvol.gcno, likely.gcno, memory.gcno, log.gcno, nbd.gcno, mmio.gcno, nvme_intel.gcno, nvme_ocssd_spec.gcno, nvme_ocssd.gcno, nvmf_fc_spec.gcno, nvme_spec.gcno, nvme.gcno, nvme_zns.gcno, notify.gcno, opal.gcno, nvmf_spec.gcno, nvmf_cmd.gcno, pci_ids.gcno, nvmf.gcno, pipe.gcno, opal_spec.gcno, nvmf_transport.gcno, queue.gcno, reduce.gcno, rpc.gcno, scheduler.gcno, scsi.gcno, scsi_spec.gcno, stdinc.gcno, string.gcno, thread.gcno, sock.gcno, trace.gcno, trace_parser.gcno, tree.gcno, ublk.gcno, util.gcno, uuid.gcno, version.gcno, vfio_user_pci.gcno, vfio_user_spec.gcno, vhost.gcno, vmd.gcno, zipf.gcno, xor.gcno
00:03:11.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:11.632 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:16.900 00:28:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:28:28 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:28 -- common/autotest_common.sh@10 -- # set +x
00:28:28 -- spdk/autotest.sh@91 -- # rm -f
00:28:28 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:19.480 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:19.480 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:19.480 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:19.480 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:19.480 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:19.480 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:19.480 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:19.739 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:19.998 00:28:31 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:28:31 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:28:31 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:28:31 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:28:31 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:28:31 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
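The pre-cleanup pass traced here (and continued just below) walks every NVMe block device: the zoned check reads /sys/block/<dev>/queue/zoned, and the GPT probe that follows decides whether the namespace holds a partition table before its first megabyte is zeroed. Condensed into plain shell (an illustrative restatement, not the verbatim SPDK helpers):

    for nvme in /sys/block/nvme*; do
        # zoned namespaces are skipped: queue/zoned reads "none" on ordinary devices
        [[ $(<"$nvme/queue/zoned") != none ]] && continue
        block=/dev/${nvme##*/}
        # a namespace is treated as free when neither the SPDK GPT parser nor blkid sees a partition table
        if ! scripts/spdk-gpt.py "$block" && [[ -z $(blkid -s PTTYPE -o value "$block") ]]; then
            dd if=/dev/zero of="$block" bs=1M count=1   # wipe stale metadata in the first 1 MiB
        fi
    done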
00:28:31 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:28:31 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:31 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:28:31 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:28:31 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:28:31 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:28:31 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:28:31 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:28:31 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:28:31 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:28:31 -- scripts/common.sh@391 -- # pt=
00:28:31 -- scripts/common.sh@392 -- # return 1
00:28:31 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00242711 s, 432 MB/s
00:28:31 -- spdk/autotest.sh@118 -- # sync
00:28:31 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:28:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:28:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:25.273 00:28:36 -- spdk/autotest.sh@124 -- # uname -s
00:28:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:28:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:28:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:36 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST setup.sh
************************************
00:28:36 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:28:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:28:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:28:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:28:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:36 setup.sh -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST acl
************************************
00:28:36 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:25.533 * Looking for test storage...
00:03:25.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:28:36 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:28:36 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:28:36 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:28:36 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:28:36 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:28:36 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:28:36 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:28:36 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:36 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:28:36 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:28:36 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:28:36 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:28:36 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:28:36 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:28:36 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:28:36 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:28.823 00:28:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:28:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:28:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:28:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:28:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:28:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:31.407 Hugepages
00:03:31.407 node hugesize free / total
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:28:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:28:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:28:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.407
00:03:31.407 Type BDF Vendor Device NUMA Driver Device Block devices
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:28:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:28:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:28:42 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:28:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the same match / skip / read triple repeats for the ioatdma ports 0000:00:04.1 through 0000:00:04.7]
00:28:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:28:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:28:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:28:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:28:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:28:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the ioatdma match / skip / read triple repeats for the ports 0000:80:04.0 through 0000:80:04.7]
00:28:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:28:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:28:42 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:42 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST denied
************************************
00:28:42 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:28:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:28:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:28:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:28:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:28:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:34.957 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:28:46 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:28:46 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:28:46 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:28:46 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:28:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:28:46 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:28:46 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:28:46 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:28:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:28:46 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:39.155
00:03:39.155 real 0m7.143s
00:03:39.155 user 0m2.362s
00:03:39.155 sys 0m4.065s
00:28:50 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:50 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST denied
************************************
00:28:50 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:28:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:28:50 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:50 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST allowed
************************************
00:28:50 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:28:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:28:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:28:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:28:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:28:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:43.351 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:28:54 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:28:54 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:28:54 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:28:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:28:54 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:45.889
00:03:45.889 real 0m7.045s
00:03:45.889 user 0m2.244s
00:03:45.889 sys 0m3.987s
00:28:57 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:57 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST allowed
************************************
00:28:57 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:45.889 real 0m20.494s
00:03:45.889 user 0m7.040s
00:03:45.889 sys 0m12.143s
00:28:57 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST acl
************************************
00:28:57 setup.sh -- common/autotest_common.sh@1142 -- # return 0
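Both acl subtests above drive the same script through environment variables: PCI_BLOCKED makes setup.sh skip a controller, while PCI_ALLOWED restricts binding to it. The two invocations whose output the tests grep for, restated as standalone commands:

    PCI_BLOCKED=' 0000:5e:00.0' ./scripts/setup.sh config   # -> Skipping denied controller at 0000:5e:00.0
    PCI_ALLOWED='0000:5e:00.0' ./scripts/setup.sh config    # -> 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
    ./scripts/setup.sh reset                                # hand the devices back to the kernel drivers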
00:03:45.889 00:28:57 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:28:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:57 setup.sh -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST hugepages
************************************
00:28:57 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:28:57 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:28:57 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:28:57 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:28:57 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:28:57 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:28:57 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:28:57 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:28:57 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:28:57 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:28:57 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:28:57 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:28:57 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172769220 kB' 'MemAvailable: 175620768 kB' 'Buffers: 3896 kB' 'Cached: 10904564 kB' 'SwapCached: 0 kB' 'Active: 7887128 kB' 'Inactive: 3492896 kB' 'Active(anon): 7499752 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474936 kB' 'Mapped: 208788 kB' 'Shmem: 7028188 kB' 'KReclaimable: 231452 kB' 'Slab: 745320 kB' 'SReclaimable: 231452 kB' 'SUnreclaim: 513868 kB' 'KernelStack: 20320 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8975716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314648 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[the same compare / continue / read -r var val _ cycle repeats for each following /proc/meminfo field]
00:28:57 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:28:57 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:28:57 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:46.152 00:28:57 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:28:57 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:28:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST default_setup
************************************
00:28:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:28:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:28:57 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:28:57 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:49.441 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:49.441 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:49.441 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
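The numbers in the get_test_nr_hugepages trace follow directly from the 2048 kB page size established earlier: the requested size is 2 GiB expressed in kB, and nr_hugepages is that size divided by the page size, all assigned to node 0 here. As a worked one-liner:

    echo $(( 2097152 / 2048 ))   # -> 1024 hugepages for a 2 GiB (2097152 kB) reservation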
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:49.441 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:50.018 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.018 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174900772 kB' 'MemAvailable: 177752344 kB' 'Buffers: 3896 kB' 'Cached: 10904680 kB' 'SwapCached: 0 kB' 'Active: 7909576 kB' 'Inactive: 3492896 kB' 'Active(anon): 7522200 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496716 kB' 'Mapped: 209764 kB' 'Shmem: 7028304 kB' 'KReclaimable: 231500 kB' 'Slab: 742800 kB' 'SReclaimable: 231500 kB' 'SUnreclaim: 511300 
00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174900772 kB' 'MemAvailable: 177752344 kB' 'Buffers: 3896 kB' 'Cached: 10904680 kB' 'SwapCached: 0 kB' 'Active: 7909576 kB' 'Inactive: 3492896 kB' 'Active(anon): 7522200 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496716 kB' 'Mapped: 209764 kB' 'Shmem: 7028304 kB' 'KReclaimable: 231500 kB' 'Slab: 742800 kB' 'SReclaimable: 231500 kB' 'SUnreclaim: 511300 kB' 'KernelStack: 20528 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9001840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:50.018
[xtrace elided: setup/common.sh@32 compares each /proc/meminfo key against AnonHugePages and hits `continue` on every non-match, MemTotal through HardwareCorrupted]
00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.019 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:50.019 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:50.019 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
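That anon=0 is the AnonHugePages value, and the trace just walked is the whole mechanism for reading it: mapfile the chosen meminfo file, strip any per-node "Node <n> " prefix, split each line on ': ', and return the value whose key matches. A minimal reconstruction of that pattern, inferred from the common.sh@17-@33 xtrace rather than copied from setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below is an extended glob

    # get_meminfo KEY [NODE]: print the value recorded for KEY, as traced above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file (@23).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"                # @28
        # Per-node files prefix every line with "Node <n> "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        local entry var val _
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"   # @31: split "Key: value kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # @32-@33
        done
        return 1
    }

    get_meminfo AnonHugePages       # prints 0 here, matching anon=0 above
    get_meminfo HugePages_Total 0   # same key, restricted to NUMA node 0

Splitting on IFS=': ' is what makes the unit suffix disappear: for a line like "AnonHugePages: 0 kB", var gets the key, val gets the bare number, and the trailing "kB" lands in the throwaway field.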
00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.019
[xtrace elided: the same common.sh@17-@31 entry trace as above, with get=HugePages_Surp]
00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174906588 kB' 'MemAvailable: 177758112 kB' 'Buffers: 3896 kB' 'Cached: 10904680 kB' 'SwapCached: 0 kB' 'Active: 7903968 kB' 'Inactive: 3492896 kB' 'Active(anon): 7516592 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491568 kB' 'Mapped: 208752 kB' 'Shmem: 7028304 kB' 'KReclaimable: 231404 kB' 'Slab: 742704 kB' 'SReclaimable: 231404 kB' 'SUnreclaim: 511300 kB' 'KernelStack: 20464 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8995736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314792 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:50.019
[xtrace elided: per-key scan against HugePages_Surp, `continue` on every non-match, MemTotal through HugePages_Rsvd]
00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
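Both lookups so far have come back 0, which is what a clean, statically sized pool should show. For context, this is the arithmetic and the sysfs writes that put the pool in that state, sketched from the hugepages.sh@40-@57 lines earlier in the log; clear_huge and verify_pool are illustrative names, get_meminfo is the sketch above, and the exact checks verify_nr_hugepages performs may differ:

    #!/usr/bin/env bash
    # CLEAR_HUGE=yes: first zero every per-size hugepage pool on every node,
    # which is what the hugepages.sh@40-@41 `echo 0` loop does.
    clear_huge() {
        local hp
        for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
            echo 0 > "$hp"
        done
    }

    # default_setup asked for size=2097152 (kB) with 2048 kB pages on node 0,
    # so the @57 nr_hugepages=1024 is just 2097152 / 2048.
    size_kb=2097152 page_kb=2048
    nr_hugepages=$(( size_kb / page_kb ))   # 1024
    echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    # What the @97-@100 lookups feed: a fresh static pool should report no
    # surplus and no reserved pages, with the full pool still free.
    verify_pool() {
        (( $(get_meminfo HugePages_Surp) == 0 )) &&
        (( $(get_meminfo HugePages_Rsvd) == 0 )) &&
        (( $(get_meminfo HugePages_Free) == nr_hugepages ))
    }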
setup/common.sh@28 -- # mapfile -t mem 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174906088 kB' 'MemAvailable: 177757612 kB' 'Buffers: 3896 kB' 'Cached: 10904700 kB' 'SwapCached: 0 kB' 'Active: 7904204 kB' 'Inactive: 3492896 kB' 'Active(anon): 7516828 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491792 kB' 'Mapped: 208752 kB' 'Shmem: 7028324 kB' 'KReclaimable: 231404 kB' 'Slab: 742704 kB' 'SReclaimable: 231404 kB' 'SUnreclaim: 511300 kB' 'KernelStack: 20528 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8995760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 
00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.020 00:29:01 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': '
[xtrace elided: get_meminfo's field scan hits "continue" on every non-matching /proc/meminfo field from Shmem through Unaccepted while looking for HugePages_Rsvd]
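To make the resumed trace below easier to follow: every "continue" is one iteration of the field scan inside setup/common.sh's get_meminfo, which splits each meminfo line on ': ' and compares the field name against the requested key. A minimal sketch, assuming a direct read of /proc/meminfo rather than the mapfile/printf round-trip the trace shows:

    # Hypothetical, condensed helper -- not the verbatim setup/common.sh.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # each mismatch logs one "continue"
            echo "$val"   # the " kB" unit, if any, was split off into $_
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo HugePages_Rsvd   # prints 0 in this run

The escaped pattern in the trace (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) is just xtrace's rendering of a literal, glob-quoted comparison.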
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.021 nr_hugepages=1024 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.021 resv_hugepages=0 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.021 surplus_hugepages=0 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.021 anon_hugepages=0 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.021 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174904284 kB' 'MemAvailable: 177755808 kB' 'Buffers: 3896 kB' 'Cached: 
10904720 kB' 'SwapCached: 0 kB' 'Active: 7903872 kB' 'Inactive: 3492896 kB' 'Active(anon): 7516496 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491432 kB' 'Mapped: 208752 kB' 'Shmem: 7028344 kB' 'KReclaimable: 231404 kB' 'Slab: 742704 kB' 'SReclaimable: 231404 kB' 'SUnreclaim: 511300 kB' 'KernelStack: 20400 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8995780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:50.021
[xtrace elided: the same field scan repeats, hitting "continue" on every field from MemTotal through Unaccepted while looking for HugePages_Total]
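What the verifier is building toward with these reads, condensed (a sketch using the hypothetical get_meminfo helper above; the variable names follow the hugepages.sh lines visible in the trace):

    nr_hugepages=1024                     # requested earlier in the test
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0
    (( total == nr_hugepages ))                # holds because surp == resv == 0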
setup/common.sh@31 -- # read -r var val _ 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.022 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84646916 kB' 'MemUsed: 13015768 kB' 'SwapCached: 0 kB' 'Active: 6080096 kB' 'Inactive: 3333524 kB' 'Active(anon): 5924516 kB' 'Inactive(anon): 0 kB' 'Active(file): 155580 kB' 'Inactive(file): 3333524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9149008 kB' 'Mapped: 133960 kB' 'AnonPages: 267748 kB' 'Shmem: 5659904 kB' 'KernelStack: 10792 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128236 kB' 'Slab: 369628 kB' 'SReclaimable: 128236 kB' 'SUnreclaim: 241392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.022
[xtrace elided: the node0 meminfo scan hits "continue" on every field from MemTotal through Unaccepted while looking for HugePages_Surp]
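The per-node pass traced here works the same way, except the source is the node's own meminfo, whose lines carry a "Node <n>" prefix that get_meminfo strips first (the mem=("${mem[@]#Node +([0-9]) }") step in the trace). A sketch under the same simplifying assumptions:

    # Hypothetical condensation of the per-node path.
    shopt -s extglob                                  # for the +([0-9]) pattern
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")                  # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && echo "$val"   # prints 0 for node0 here
    done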
00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:50.023 node0=1024 expecting 1024 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.023 00:03:50.023 real 0m4.035s 00:03:50.023 user 0m1.329s 00:03:50.023 sys 0m1.971s 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.023 00:29:01 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:50.023 ************************************ 00:03:50.023 END TEST default_setup 00:03:50.023 ************************************ 00:03:50.282 00:29:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:50.283 00:29:01 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:50.283 00:29:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.283 00:29:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.283 00:29:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.283 ************************************ 00:03:50.283 START TEST per_node_1G_alloc 00:03:50.283 ************************************ 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.283 00:29:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.817 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.817 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.817 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.817 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.084 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.084 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.084 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.084 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174883120 kB' 'MemAvailable: 177734644 kB' 'Buffers: 3896 kB' 'Cached: 10904808 kB' 'SwapCached: 0 kB' 'Active: 7904348 kB' 'Inactive: 3492896 kB' 'Active(anon): 7516972 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492216 kB' 'Mapped: 208768 kB' 'Shmem: 7028432 kB' 'KReclaimable: 231404 kB' 'Slab: 743656 kB' 'SReclaimable: 231404 kB' 'SUnreclaim: 512252 kB' 'KernelStack: 20336 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8993120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:53.084
[xtrace elided: the AnonHugePages scan hits "continue" on every field from MemTotal through SwapFree]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
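The run of records above is one complete get_meminfo probe: the helper slurps a meminfo file into an array with mapfile, strips any per-node "Node <n> " prefix, then splits each line on ': ' and walks the fields, taking the 'continue' branch until the requested name matches. Below is a minimal self-contained sketch of that pattern, assuming /proc/meminfo as input; it is an illustrative reconstruction from the trace, not the verbatim setup/common.sh source, and the function name is hypothetical.

    #!/usr/bin/env bash
    # Sketch of the meminfo-probe pattern traced above (illustrative name).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}             # field to look up, optional NUMA node
        local mem_f=/proc/meminfo
        # Per-node meminfo prefixes every line with "Node <n> ", hence the strip.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob                     # enables the +([0-9]) pattern below
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # the long 'continue' run in the trace
            echo "${val:-0}"
            return 0
        done
        return 1
    }

Against the snapshot printed above, get_meminfo_sketch AnonHugePages emits 0, which is exactly the anon=0 assignment recorded next.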
00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:53.085 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... trace records elided: the same setup/common.sh@18-@31 preamble as in the AnonHugePages probe above (node unset, mem_f=/proc/meminfo, mapfile into mem) ...]
00:03:53.086 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174885760 kB' 'MemAvailable: 177737284 kB' 'Buffers: 3896 kB' 'Cached: 10904808 kB' 'SwapCached: 0 kB' 'Active: 7903856 kB' 'Inactive: 3492896 kB' 'Active(anon): 7516480 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491252 kB' 'Mapped: 208760 kB' 'Shmem: 7028432 kB' 'KReclaimable: 231404 kB' 'Slab: 743784 kB' 'SReclaimable: 231404 kB' 'SUnreclaim: 512380 kB' 'KernelStack: 20304 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8993272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
[... trace records elided: the field-by-field 'continue' loop as above, this time scanning for HugePages_Surp ...]
00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
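Since surp is derived from a single meminfo field, the same lookup collapses to a one-line spot-check. The awk equivalent below is only a convenience for reading logs like this one, not what the harness itself runs:

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this box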
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.087 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174887104 kB' 'MemAvailable: 177738628 kB' 'Buffers: 3896 kB' 'Cached: 10904828 kB' 'SwapCached: 0 kB' 'Active: 7904796 kB' 'Inactive: 3492896 kB' 'Active(anon): 7517420 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492196 kB' 'Mapped: 209264 kB' 'Shmem: 7028452 kB' 'KReclaimable: 231404 kB' 'Slab: 743784 kB' 'SReclaimable: 231404 kB' 'SUnreclaim: 512380 kB' 'KernelStack: 20272 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8995180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.088 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the @31 IFS=': ' / read -r var val _ / @32 compare-and-continue cycle repeats for each remaining /proc/meminfo key until the requested one matches]
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
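The xtrace above is the tail of a get_meminfo lookup: the script streams /proc/meminfo with IFS=': ', skips every key that is not the one requested, and echoes the value once HugePages_Rsvd matches (0 in this run). A minimal standalone sketch of that pattern; the helper name meminfo_value is illustrative, not part of the SPDK scripts:

meminfo_value() {   # usage: meminfo_value HugePages_Rsvd
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the compare-and-continue cycle traced above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # key not present
}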
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:53.089 nr_hugepages=1024
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:53.089 resv_hugepages=0
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:53.089 surplus_hugepages=0
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:53.089 anon_hugepages=0
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:53.089 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:53.090 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.090 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.090 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.090 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.090 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174882820 kB' 'MemAvailable: 177734344 kB' 'Buffers: 3896 kB' 'Cached: 10904840 kB' 'SwapCached: 0 kB' 'Active: 7908140 kB' 'Inactive: 3492896 kB' 'Active(anon): 7520764 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495628 kB' 'Mapped: 209264 kB' 'Shmem: 7028464 kB' 'KReclaimable: 231404 kB' 'Slab: 743784 kB' 'SReclaimable: 231404 kB' 'SUnreclaim: 512380 kB' 'KernelStack: 20368 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8998476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314792 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:03:53.090 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.090 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: per-key scan over the snapshot above until HugePages_Total matches]
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
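hugepages.sh@107 and @110 assert the same identity twice: the 1024 pages reported by HugePages_Total must equal the requested pool (nr_hugepages) plus the surplus and reserved counts, all zero here. A hedged re-creation of that check against live counters, reusing the hypothetical meminfo_value helper sketched earlier; the sysfs path is the standard 2 MiB hugepage control directory:

hp=/sys/kernel/mm/hugepages/hugepages-2048kB
nr=$(cat "$hp/nr_hugepages")            # persistent pool size
surp=$(cat "$hp/surplus_hugepages")     # overcommitted pages, 0 in this run
resv=$(meminfo_value HugePages_Rsvd)    # resv=0 in this run
total=$(meminfo_value HugePages_Total)
(( total == nr + surp + resv )) && echo "pool accounting consistent: $total pages"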
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85683160 kB' 'MemUsed: 11979524 kB' 'SwapCached: 0 kB' 'Active: 6079720 kB' 'Inactive: 3333524 kB' 'Active(anon): 5924140 kB' 'Inactive(anon): 0 kB' 'Active(file): 155580 kB' 'Inactive(file): 3333524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9149028 kB' 'Mapped: 133968 kB' 'AnonPages: 267372 kB' 'Shmem: 5659924 kB' 'KernelStack: 10680 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128236 kB' 'Slab: 370496 kB' 'SReclaimable: 128236 kB' 'SUnreclaim: 242260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.091 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: per-key scan over the node0 snapshot above until HugePages_Surp matches]
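When get_meminfo is called with a node argument, the trace shows mem_f being repointed from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the "Node 0 " prefix stripped in bulk at common.sh@29 before the same key scan runs. A sketch of that per-node variant under the same assumptions; node_meminfo_value is an illustrative name:

node_meminfo_value() {   # usage: node_meminfo_value 0 HugePages_Surp
    local node=$1 get=$2 _n _i var val _
    # per-node lines read "Node 0 HugePages_Surp: 0"; with IFS=': ' the first
    # two fields are the "Node <id>" prefix the real script strips wholesale
    while IFS=': ' read -r _n _i var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}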
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 89201488 kB' 'MemUsed: 4516980 kB' 'SwapCached: 0 kB' 'Active: 1824416 kB' 'Inactive: 159372 kB' 'Active(anon): 1592620 kB' 'Inactive(anon): 0 kB' 'Active(file): 231796 kB' 'Inactive(file): 159372 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1759776 kB' 'Mapped: 74792 kB' 'AnonPages: 224140 kB' 'Shmem: 1368608 kB' 
'KernelStack: 9672 kB' 'PageTables: 4736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103168 kB' 'Slab: 373288 kB' 'SReclaimable: 103168 kB' 'SUnreclaim: 270120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:53.355 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the IFS=': ' read loop emits continue for every node-meminfo key that is not HugePages_Surp, from Active(file) through HugePages_Free)
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126-128 -- # (same for node 1)
node1=512 expecting 512
00:03:53.356 00:29:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]

real	0m3.048s
user	0m1.233s
sys	0m1.885s
00:29:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.356 00:29:04
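The @127 lines above are the harness's even-allocation check: each node's page count is written as a key into an associative array, so an even split leaves the array with exactly one distinct key. A minimal standalone sketch of that dedup idiom, using the counts from this run (hypothetical script, not the actual setup/hugepages.sh source):

    #!/usr/bin/env bash
    # Per-node page counts as traced above: 512 on each of two nodes.
    nodes_test=(512 512)        # indexed array, one slot per NUMA node
    declare -A sorted_t=()

    for node in "${!nodes_test[@]}"; do
        # Using the count itself as the key deduplicates: an even split
        # across all nodes leaves sorted_t with exactly one entry.
        sorted_t[${nodes_test[node]}]=1
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done

    (( ${#sorted_t[@]} == 1 )) && echo 'allocation is even across nodes'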
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.356 ************************************ 00:03:53.356 END TEST per_node_1G_alloc 00:03:53.356 ************************************ 00:03:53.356 00:29:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:53.356 00:29:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:53.356 00:29:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.356 00:29:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.356 00:29:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.356 ************************************ 00:03:53.356 START TEST even_2G_alloc 00:03:53.356 ************************************ 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:53.356 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:53.357 00:29:04 
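Read together, the get_test_nr_hugepages trace above amounts to a short calculation: the requested 2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and with no user-specified nodes the @81-@84 loop hands 512 pages to each of the two nodes, walking node ids from high to low. A sketch of that arithmetic under those assumptions (values taken from the trace; the real logic lives in setup/hugepages.sh):

    #!/usr/bin/env bash
    size_kb=2097152                  # requested size: 2 GiB
    hugepage_kb=2048                 # default Hugepagesize on this system
    nr_hugepages=$(( size_kb / hugepage_kb ))   # = 1024 pages
    no_nodes=2

    # Mirror the @81-@84 loop: assign from the highest node id down.
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        echo "node${node}: $(( nr_hugepages / no_nodes )) pages"    # 512 each
    done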
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.357 00:29:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.987 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.987 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.987 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.256 00:29:07 
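Almost everything traced from here on is a single helper, get_meminfo, unrolled record by record by xtrace: it picks a source file (the global /proc/meminfo, or a node's own meminfo when a node id is supplied), strips the "Node N " prefix that per-node entries carry, then scans key/value pairs until the requested key matches and echoes its value. A self-contained re-creation of that pattern, simplified from what the trace shows (not the verbatim setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob    # for the +([0-9]) prefix-strip pattern below

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node files live at /sys/devices/system/node/node<N>/meminfo;
        # with an empty node id the test fails and the global file is kept.
        [[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
            mem_f=/sys/devices/system/node/node${node}/meminfo

        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node entries read "Node 0 MemTotal: ..."; drop the prefix so
        # keys line up with the global file's format.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }

    get_meminfo AnonHugePages       # global counter
    get_meminfo HugePages_Total 0   # node 0, where sysfs exposes it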
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174891948 kB' 'MemAvailable: 177743468 kB' 'Buffers: 3896 kB' 'Cached: 10904968 kB' 'SwapCached: 0 kB' 'Active: 7903216 kB' 'Inactive: 3492896 kB' 'Active(anon): 7515840 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490004 kB' 'Mapped: 207812 kB' 'Shmem: 7028592 kB' 'KReclaimable: 231396 kB' 'Slab: 743716 kB' 'SReclaimable: 231396 kB' 'SUnreclaim: 512320 kB' 'KernelStack: 20304 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8982892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.256 00:29:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # (xtrace condensed: the read loop emits continue for every snapshot key that is not AnonHugePages — SwapCached, the Active/Inactive splits, swap, zswap, writeback, page-table, slab, vmalloc, percpu and HardwareCorrupted counters) 00:03:56.257 00:29:07 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:56.257 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.257 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.257 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.257 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.257 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174892484 kB' 'MemAvailable: 177744004 kB' 'Buffers: 3896 kB' 'Cached: 10904972 kB' 'SwapCached: 0 kB' 'Active: 7902400 kB' 'Inactive: 3492896 kB' 'Active(anon): 7515024 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489628 kB' 'Mapped: 207712 kB' 'Shmem: 7028596 kB' 'KReclaimable: 231396 kB' 'Slab: 743708 kB' 'SReclaimable: 231396 kB' 'SUnreclaim: 512312 kB' 'KernelStack: 20304 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8982912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
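The two snapshots printed so far already let the allocation be cross-checked by hand: HugePages_Total is 1024 at a Hugepagesize of 2048 kB, and 1024 x 2048 kB = 2097152 kB, which matches both the Hugetlb line in the snapshots and the size even_2G_alloc requested (the figures coincide here because HugePages_Surp is 0, so no surplus pages inflate Hugetlb). The same cross-check on a live box, using the IFS=': ' read idiom the trace exercises:

    #!/usr/bin/env bash
    # Recompute Hugetlb from HugePages_Total x Hugepagesize (standard keys).
    while IFS=': ' read -r var val _; do
        case $var in
            HugePages_Total) total=$val ;;
            Hugepagesize)    size_kb=$val ;;
            Hugetlb)         hugetlb_kb=$val ;;
        esac
    done < /proc/meminfo
    echo "${total} pages x ${size_kb} kB = $(( total * size_kb )) kB (Hugetlb: ${hugetlb_kb} kB)"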
00:03:56.258 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (xtrace condensed: the read loop emits continue for every snapshot key that is not HugePages_Surp, from MemFree through HugePages_Rsvd)
00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.259 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174892548 kB' 'MemAvailable: 177744068 kB' 'Buffers: 3896 kB' 'Cached: 10904984 kB' 'SwapCached: 0 kB' 'Active: 7902272 kB' 'Inactive: 3492896 kB' 'Active(anon): 7514896 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489464 kB' 'Mapped: 207712 kB' 'Shmem: 7028608 kB' 'KReclaimable: 231396 kB' 'Slab: 743708 kB' 'SReclaimable: 231396 kB' 'SUnreclaim: 512312 kB' 'KernelStack: 20288 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8982932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 
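Outside the harness, the counters this scan keeps re-reading can be spot-checked with two commands; the paths and keys below are the standard kernel layout, and the per-node counts (512 and 512 in this run) live under sysfs:

    # Global hugepage and THP counters:
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages|Hugetlb)' /proc/meminfo
    # Per-node 2 MiB hugepage counts:
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages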
00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.260 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.261 nr_hugepages=1024 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.261 resv_hugepages=0 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.261 surplus_hugepages=0 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.261 anon_hugepages=0 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.261 00:29:07 
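Together with the HugePages_Total read now in progress, the values echoed above feed the even_2G_alloc consistency check. A sketch of the arithmetic under test, plugging in the numbers from this log (the variable wiring is an assumption; it reuses the get_meminfo sketch above):

  nr_hugepages=1024                           # requested: 1024 x 2048 kB = 2 GB
  surp=$(get_meminfo HugePages_Surp)          # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)          # 0 in this run
  total=$(get_meminfo HugePages_Total)        # 1024 in this run
  (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0: holds
  (( total == nr_hugepages ))                 # also holds: no surplus or reserved pages

If either check failed, the kernel would have met the request with surplus or reserved pages rather than a straight allocation, and the test would flag it.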
00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.261 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.262 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174893380 kB' 'MemAvailable: 177744900 kB' 'Buffers: 3896 kB' 'Cached: 10905008 kB' 'SwapCached: 0 kB' 'Active: 7902444 kB' 'Inactive: 3492896 kB' 'Active(anon): 7515068 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489632 kB' 'Mapped: 207712 kB' 'Shmem: 7028632 kB' 'KReclaimable: 231396 kB' 'Slab: 743708 kB' 'SReclaimable: 231396 kB' 'SUnreclaim: 512312 kB' 'KernelStack: 20304 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8982952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' [... setup/common.sh@31-32 xtrace elided: the same compare/continue walk over every field above, no match until HugePages_Total ...] 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
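get_nodes and the loop traced here verify the per-node half of the check: 1024 pages split evenly across the two NUMA nodes means 512 expected on each, and HugePages_Surp is then read from each node's own meminfo file. A sketch of that bookkeeping, reconstructed from the trace (the trace fills nodes_sys inside get_nodes and accumulates into nodes_test; the sketch folds them into one array for brevity and reuses the get_meminfo sketch above):

  shopt -s extglob
  declare -a nodes_test
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_test[${node##*node}]=512                 # even 2G alloc: 512 pages per node
  done
  no_nodes=${#nodes_test[@]}                         # 2 on this box
  resv=0                                             # from the HugePages_Rsvd read above
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                 # fold in reserved pages (0 here)
      surp=$(get_meminfo HugePages_Surp "$node")     # node0 -> 0, node1 -> 0 in this log
      (( nodes_test[node] += surp ))
  done

Both node0 and node1 report HugePages_Total: 512 with zero surplus in the dumps below, so the even split holds.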
00:03:56.263 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85690452 kB' 'MemUsed: 11972232 kB' 'SwapCached: 0 kB' 'Active: 6079848 kB' 'Inactive: 3333524 kB' 'Active(anon): 5924268 kB' 'Inactive(anon): 0 kB' 'Active(file): 155580 kB' 'Inactive(file): 3333524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9149052 kB' 'Mapped: 133672 kB' 'AnonPages: 267420 kB' 'Shmem: 5659948 kB' 'KernelStack: 10648 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128236 kB' 'Slab: 370288 kB' 'SReclaimable: 128236 kB' 'SUnreclaim: 242052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [... setup/common.sh@31-32 xtrace elided: compare/continue walk over node0's fields above, no match until HugePages_Surp ...]
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:56.264 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 89202920 kB' 'MemUsed: 4515548 kB' 'SwapCached: 0 kB' 'Active: 1822648 
kB' 'Inactive: 159372 kB' 'Active(anon): 1590852 kB' 'Inactive(anon): 0 kB' 'Active(file): 231796 kB' 'Inactive(file): 159372 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1759896 kB' 'Mapped: 74040 kB' 'AnonPages: 222204 kB' 'Shmem: 1368728 kB' 'KernelStack: 9656 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103160 kB' 'Slab: 373420 kB' 'SReclaimable: 103160 kB' 'SUnreclaim: 270260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.265 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.266 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:56.593 node0=512 expecting 512 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:56.593 node1=512 expecting 512 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:56.593 00:03:56.593 real 0m3.064s 00:03:56.593 user 0m1.255s 00:03:56.593 sys 0m1.875s 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.593 00:29:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.593 ************************************ 00:03:56.593 END TEST even_2G_alloc 00:03:56.593 ************************************ 00:03:56.593 
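For readers following the trace, the pattern that dominates it is simple: get_meminfo in setup/common.sh walks every "key: value" line of the chosen meminfo file and skips it with continue until the requested key matches, and the verify loop in setup/hugepages.sh then folds the reserved and surplus counts into nodes_test before printing the "nodeN=X expecting Y" checks. A minimal Bash sketch of that shape, reconstructed from the xtrace rather than copied from the SPDK source; resv, nodes_sys, and the exact echo wiring are assumptions chosen to reproduce this run:

#!/usr/bin/env bash
shopt -s extglob

# Sketch of the get_meminfo shape the xtrace shows: print KEY's value from
# /proc/meminfo, or from /sys/devices/system/node/node<NODE>/meminfo when
# NODE is given (those per-node lines carry a "Node N " prefix).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # one "continue" per skipped field above
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Sketch of the verification around it (hugepages.sh@115-@128 in the trace);
# resv and nodes_sys are gathered earlier in the real script, assumed here.
resv=0
nodes_test=(512 512)
nodes_sys=(512 512)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                             # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node" || echo 0) ))  # @117
done
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1                                               # @127
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"          # @128
done

Run against the node1 dump above, get_meminfo HugePages_Surp 1 would print 0, exactly the value the trace echoes at setup/common.sh@33 before both nodes report "512 expecting 512".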
00:29:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:56.593 00:29:07 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:56.593 00:29:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:56.593 00:29:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:56.593 00:29:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:56.593 ************************************
00:03:56.593 START TEST odd_alloc
************************************
00:29:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:29:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:29:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:29:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.132 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.132 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:59.132 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:59.398 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174900308 kB' 'MemAvailable: 177751812 kB' 'Buffers: 3896 kB' 'Cached: 10905128 kB' 'SwapCached: 0 kB' 'Active: 7896040 kB' 'Inactive: 3492896 kB' 'Active(anon): 7508664 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482192 kB' 'Mapped: 207856 kB' 'Shmem: 7028752 kB' 'KReclaimable: 231364 kB' 'Slab: 743700 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512336 kB' 'KernelStack: 20272 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8970696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:29:10 [xtrace condensed: the @31/@32 read-and-skip cycle repeats for every field, MemTotal through HardwareCorrupted, before AnonHugePages matches]
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
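To make the numbers above easier to follow: odd_alloc asked for 2098176 kB, which is 1024.5 two-megabyte pages, so nr_hugepages is rounded up to the odd count 1025 and split 513/512 across the two nodes, exactly the nodes_test assignments traced at setup/hugepages.sh@81-@84 (and the same total that HUGEMEM=2049, i.e. 2049 MB of 2 MB pages, hands to scripts/setup.sh). A small Bash sketch that reproduces those traced values; the ceiling division is an assumption about how get_test_nr_hugepages rounds, while the loop mirrors the trace:

#!/usr/bin/env bash
# Reconstructed sketch of the per-node split traced at setup/hugepages.sh@81-@84.
size=2098176              # requested kB (HUGEMEM=2049 MB)
default_hugepages=2048    # Hugepagesize in kB
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))  # 1025
_no_nodes=2               # NUMA nodes on this test rig
nodes_test=()
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( nr_hugepages / _no_nodes ))   # 512, then 513
    : $(( nr_hugepages -= nodes_test[_no_nodes - 1] ))          # 513, then 0
    : $(( --_no_nodes ))                                        # 1, then 0
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"            # node0=513 node1=512

The leftover page lands on node0 (513 against node1's 512), which the verification pass that follows checks node by node.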
00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.399 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.399 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.399 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.399 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.399 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.399 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.399 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174900960 kB' 'MemAvailable: 177752464 kB' 'Buffers: 3896 kB' 'Cached: 10905132 kB' 'SwapCached: 0 kB' 'Active: 7894424 kB' 'Inactive: 3492896 kB' 'Active(anon): 7507048 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481548 kB' 'Mapped: 207724 kB' 'Shmem: 7028756 kB' 'KReclaimable: 231364 kB' 'Slab: 743672 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512308 kB' 'KernelStack: 20256 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8970712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314760 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:03:59.399 [xtrace condensed: the @31/@32 read-and-skip cycle repeats for every field, MemTotal through Unaccepted, while looking for HugePages_Surp]
00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174901308 kB' 'MemAvailable: 177752812 kB' 'Buffers: 3896 kB' 'Cached: 10905148 kB' 'SwapCached: 0 kB' 'Active: 7894428 kB' 'Inactive: 3492896 kB' 'Active(anon): 7507052 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481552 kB' 'Mapped: 207724 kB' 'Shmem: 7028772 kB' 'KReclaimable: 231364 kB' 'Slab: 743672 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512308 kB' 'KernelStack: 20256 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8970732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314760 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.401 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
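The xtrace above is the core of the get_meminfo helper in setup/common.sh: slurp the whole meminfo file once with mapfile, strip the "Node N " prefix that per-node files carry, then walk the lines with IFS=': ' read -r var val _ until the requested key matches and echo its value. A minimal self-contained sketch of that pattern (illustrative approximation, not the verbatim SPDK source):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (not the verbatim
    # setup/common.sh source). Prints the value of one meminfo key, from
    # /proc/meminfo or from a node's own meminfo when a node is given.
    shopt -s extglob   # for the +([0-9]) pattern used below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # With $node empty this tests the literal (nonexistent) path
        # /sys/devices/system/node/node/meminfo, exactly as common.sh@23
        # shows above, so the global file is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it so the
        # same scan works for both files (common.sh@29 above).
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp      # prints 0 on the box traced here
    get_meminfo_sketch HugePages_Total 0   # node0's share: 512

The scan is linear in the number of keys, and the file is re-read on every call, which is why the trace repeats the full mapfile-plus-scan sequence for each query (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total).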
[xtrace elided: the per-key scan repeats as before -- common.sh@32 match test, continue, IFS=': ' and read -r var val _ at common.sh@31 -- from MemTotal down through HugePages_Free, until the requested key matches:]
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:59.403 nr_hugepages=1025
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.403 resv_hugepages=0
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.403 surplus_hugepages=0
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.403 anon_hugepages=0
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
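The lookups feed the assertions at hugepages.sh@107 and @109: the pool the kernel reports (1025 pages here) must be explained by the requested count plus surplus plus reserved pages, and for this test it must also match the request exactly. Restated as a stand-alone check (hypothetical variable names; awk used in place of the script's own parser, values shown are the ones this run produced):

    # Stand-alone restatement of the checks traced above (hypothetical
    # variable names; not the hugepages.sh source).
    req=1025   # nr_hugepages requested by the odd_alloc test

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0

    (( total == req + surp + resv )) || echo "pool accounting mismatch" >&2
    (( total == req ))               || echo "allocated != requested" >&2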
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.403 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174901308 kB' 'MemAvailable: 177752812 kB' 'Buffers: 3896 kB' 'Cached: 10905164 kB' 'SwapCached: 0 kB' 'Active: 7894728 kB' 'Inactive: 3492896 kB' 'Active(anon): 7507352 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481820 kB' 'Mapped: 207724 kB' 'Shmem: 7028788 kB' 'KReclaimable: 231364 kB' 'Slab: 743672 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512308 kB' 'KernelStack: 20288 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8970756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
[xtrace elided: per-key scan as before, from MemTotal through Unaccepted]
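The snapshot just printed is internally consistent: HugePages_Total: 1025 at Hugepagesize: 2048 kB gives Hugetlb: 2099200 kB (1025 x 2048). The same global pool is visible through sysfs; a quick way to cross-check it outside the test harness (standard kernel paths, shown for orientation rather than taken from this log):

    # Global 2 MiB hugepage pool, matching the meminfo snapshot above:
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # -> 1025

    # Same numbers as reported by /proc/meminfo:
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb' /proc/meminfo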
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.405 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85705160 kB' 'MemUsed: 11957524 kB' 'SwapCached: 0 kB' 'Active: 6068332 kB' 'Inactive: 3333524 kB' 'Active(anon): 5912752 kB' 'Inactive(anon): 0 kB' 'Active(file): 155580 kB' 'Inactive(file): 3333524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9149096 kB' 'Mapped: 133684 kB' 'AnonPages: 255980 kB' 'Shmem: 5659992 kB' 'KernelStack: 10648 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128220 kB' 'Slab: 370368 kB' 'SReclaimable: 128220 kB' 'SUnreclaim: 242148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
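get_nodes records 512 pages on node0 and 513 on node1: 1025 is odd, so it cannot split evenly across the two NUMA nodes, which is precisely the situation the odd_alloc test exercises. The per-node counts and the prefixed per-node meminfo lines can be inspected directly (standard sysfs paths; the outputs shown are what this box's trace implies, not additional log lines):

    # Per-node share of the odd-sized pool (standard sysfs layout):
    for n in /sys/devices/system/node/node[0-9]*; do
        printf '%s: ' "${n##*/}"
        cat "$n/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # -> node0: 512
    #    node1: 513

    # Per-node meminfo lines carry a "Node N " prefix, hence the strip
    # at common.sh@29 before the scan:
    grep HugePages /sys/devices/system/node/node0/meminfo
    # -> Node 0 HugePages_Total:   512
    #    Node 0 HugePages_Free:    512
    #    Node 0 HugePages_Surp:      0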
[xtrace elided: the per-key scan now runs over the node0 snapshot -- common.sh@32 match test, continue, IFS=': ' and read -r var val _ at common.sh@31 -- from MemTotal onward; this excerpt breaks off mid-scan at Slab]
00:03:59.406 00:29:10
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 89196716 kB' 'MemUsed: 4521752 kB' 'SwapCached: 0 kB' 'Active: 1826240 kB' 'Inactive: 159372 kB' 'Active(anon): 1594444 kB' 'Inactive(anon): 0 kB' 'Active(file): 231796 kB' 'Inactive(file): 159372 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1760008 kB' 'Mapped: 74040 kB' 'AnonPages: 225636 kB' 'Shmem: 1368840 kB' 'KernelStack: 9640 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103144 kB' 'Slab: 373304 kB' 'SReclaimable: 103144 kB' 'SUnreclaim: 270160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.406 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' read/compare loop skips every remaining node1 meminfo field the same way until it reaches HugePages_Surp ...]
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:59.407 node0=512 expecting 513
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:59.407 node1=513 expecting 512
00:03:59.407 00:29:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:59.407 
00:03:59.407 real	0m3.049s
00:03:59.407 user	0m1.233s
00:03:59.408 sys	0m1.880s
00:03:59.408 00:29:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:59.408 00:29:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:59.408 ************************************
00:03:59.408 END TEST odd_alloc
00:03:59.408 ************************************
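The trace above is the whole of get_meminfo at work: setup/common.sh snapshots /proc/meminfo (or a node's meminfo under /sys, whose lines carry a "Node N " prefix), strips that prefix, and walks the fields with an IFS=': ' read until the requested key matches. A minimal standalone sketch of that lookup follows; the function name get_meminfo_field is illustrative, not the script's own helper, and this is a condensation of the traced logic rather than a verbatim extract.

#!/usr/bin/env bash
# Sketch: fetch one field (e.g. HugePages_Surp) from /proc/meminfo or from a
# per-NUMA-node meminfo file, mirroring the loop traced above.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_field() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node stats live under /sys; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " style prefix
    local line
    for line in "${mem[@]}"; do
        # "HugePages_Surp: 0" splits into var=HugePages_Surp, val=0
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# e.g.: get_meminfo_field HugePages_Surp 0   -> surplus pages on node 0

The long per-field runs in the log are exactly this loop hitting continue once per non-matching key before the echo on the match.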
00:03:59.669 00:29:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:59.669 00:29:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:59.669 00:29:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:59.669 00:29:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:59.669 00:29:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:59.669 ************************************
00:03:59.669 START TEST custom_alloc
00:03:59.669 ************************************
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:59.669 00:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[... get_test_nr_hugepages_per_node re-declares user_nodes, _nr_hugepages=1024, _no_nodes=2 and nodes_test as above ...]
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[... get_test_nr_hugepages_per_node re-declares user_nodes, _nr_hugepages=1024, _no_nodes=2 and nodes_test as above ...]
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
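The arithmetic behind the HUGENODE string just traced: the 1048576 kB and 2097152 kB requests divide by the 2048 kB default hugepage size into 512 and 1024 pages, pinned to node 0 and node 1 respectively, then the assignments are joined with commas. A small sketch reproducing that computation; only HUGENODE's final shape is taken from the trace, and the intermediate variable names here are illustrative.

#!/usr/bin/env bash
# Sketch: rebuild HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' from kB sizes.
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # kB, 2048 here

declare -a nodes_hp
nodes_hp[0]=$(( 1048576 / default_hugepages ))   # 1 GiB worth -> 512 pages on node 0
nodes_hp[1]=$(( 2097152 / default_hugepages ))   # 2 GiB worth -> 1024 pages on node 1

parts=() total=0
for node in "${!nodes_hp[@]}"; do
    parts+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( total += nodes_hp[node] ))
done
HUGENODE=$(IFS=,; echo "${parts[*]}")
echo "HUGENODE=$HUGENODE"   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "total pages: $total"  # 1536, the nr_hugepages the test verifies next

That 1536 total is why setup/hugepages.sh@188 sets nr_hugepages=1536 immediately after setup.sh runs below.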
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:59.669 00:29:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:02.207 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:02.207 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:02.207 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173854212 kB' 'MemAvailable: 176705716 kB' 'Buffers: 3896 kB' 'Cached: 10905284 kB' 'SwapCached: 0 kB' 'Active: 7896076 kB' 'Inactive: 3492896 kB' 'Active(anon): 7508700 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482600 kB' 'Mapped: 207812 kB' 'Shmem: 7028908 kB' 'KReclaimable: 231364 kB' 'Slab: 744064 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512700 kB' 'KernelStack: 20320 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8971364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
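Before it counts anything, verify_nr_hugepages guards on transparent hugepage state: the setup/hugepages.sh@96 comparison above matches the contents of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never", active policy in brackets) against the pattern *\[\n\e\v\e\r\]*, so anon hugepage accounting is only consulted when the active policy is not "never". A condensed sketch of that guard; the echo strings are illustrative, not output the test produces.

#!/usr/bin/env bash
# Sketch: the THP guard behind setup/hugepages.sh@96. The sysfs file shows
# all policies with the active one bracketed, e.g. "always [madvise] never".
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    echo "THP not disabled (policy line: $thp); AnonHugePages is worth reading"
else
    echo "THP disabled; anon hugepages are expected to stay at 0"
fi

In the run above the policy is [madvise], so the test proceeds to read AnonHugePages from the snapshot it just took, which is what the scan below is doing.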
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.471 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the IFS=': ' read/compare loop skips every remaining meminfo field the same way until it reaches AnonHugePages ...]
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
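With anon=0 established, the verification this script family builds toward reduces to the single arithmetic check already seen at setup/hugepages.sh@110 in odd_alloc: HugePages_Total must equal the requested count plus surplus and reserved pages. A sketch of that check, assuming surp and resv map to the HugePages_Surp and HugePages_Rsvd meminfo fields (the trace explicitly reads HugePages_Surp; mapping resv to HugePages_Rsvd is an assumption here).

#!/usr/bin/env bash
# Sketch: the accounting identity verify_nr_hugepages checks against the kernel.
nr_hugepages=1536   # what custom_alloc requested: 512 on node0 + 1024 on node1

# Illustrative helper, not from the SPDK scripts: read one /proc/meminfo value.
read_field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(read_field HugePages_Total)
surp=$(read_field HugePages_Surp)
resv=$(read_field HugePages_Rsvd)   # assumed source for "resv"

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
fi

The trace resumes below with get_meminfo HugePages_Surp gathering the surplus term of that identity from a fresh /proc/meminfo snapshot.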
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.472 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173854204 kB' 'MemAvailable: 176705708 kB' 'Buffers: 3896 kB' 'Cached: 10905288 kB' 'SwapCached: 0 kB' 'Active: 7894920 kB' 'Inactive: 3492896 kB' 'Active(anon): 7507544 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481916 kB' 'Mapped: 207736 kB' 'Shmem: 7028912 kB' 'KReclaimable: 231364 kB' 'Slab: 744044 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512680 kB' 'KernelStack: 20288 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8971380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314728 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
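The mem_f=/proc/meminfo assignment and the [[ -e /sys/devices/system/node/node/meminfo ]] test above show how the lookup picks its source file: node= is empty in this trace, so the per-node sysfs path does not exist and the system-wide file is used. A hedged sketch of that selection (variable names taken from the trace, logic ours):

  node=""                     # empty here; would be e.g. 0 for a per-NUMA-node query
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node lines carry a "Node <N> " prefix; the trace strips it at common.sh@29 with
  # the extglob expansion: mem=("${mem[@]#Node +([0-9]) }")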
00:04:02.473 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue  (same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" record repeated for every key from MemTotal through HugePages_Free; none matches)
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
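A note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p spelling that dominates these records: it is not corruption. Inside [[ ]], a quoted right-hand side is compared literally rather than as a glob pattern, and bash's xtrace prints such a literal pattern with every character backslash-escaped. A small illustrative demo (our own, not from the test scripts):

  set -x
  get=HugePages_Surp
  [[ HugePages_Surp == "$get" ]] && echo match   # traced as: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  set +x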
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173856204 kB' 'MemAvailable: 176707708 kB' 'Buffers: 3896 kB' 'Cached: 10905304 kB' 'SwapCached: 0 kB' 'Active: 7895592 kB' 'Inactive: 3492896 kB' 'Active(anon): 7508216 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482532 kB' 'Mapped: 207744 kB' 'Shmem: 7028928 kB' 'KReclaimable: 231364 kB' 'Slab: 744036 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512672 kB' 'KernelStack: 20256 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8972900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314680 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:04:02.474 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue  (same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" record repeated for every key from MemTotal through HugePages_Free; none matches)
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
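The two arithmetic checks above pass because surplus and reserved pages are both zero, and the totals agree with the meminfo dumps: 1536 pages at the reported 2048 kB Hugepagesize is exactly the Hugetlb value of 3145728 kB. Restated as a sketch using the values from this trace:

  nr_hugepages=1536 surp=0 resv=0 anon=0
  (( 1536 == nr_hugepages + surp + resv ))   # hugepage accounting is consistent
  echo $(( 1536 * 2048 ))                    # 3145728 kB, matching the Hugetlb field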
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.476 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173856864 kB' 'MemAvailable: 176708368 kB' 'Buffers: 3896 kB' 'Cached: 10905320 kB' 'SwapCached: 0 kB' 'Active: 7895260 kB' 'Inactive: 3492896 kB' 'Active(anon): 7507884 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482188 kB' 'Mapped: 207744 kB' 'Shmem: 7028944 kB' 'KReclaimable: 231364 kB' 'Slab: 744040 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512676 kB' 'KernelStack: 20352 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8974048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314696 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue  (same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" record repeated for each key from MemTotal through AnonHugePages)
00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.477 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
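The block above is setup/common.sh's get_meminfo helper walking a meminfo file one "key: value" pair at a time under xtrace, which is why every non-matching key produces an IFS/read/compare/continue quartet in the trace. A minimal sketch of the same scan pattern (the function name lookup_meminfo and its argument handling are illustrative, not the exact SPDK source):

    # Return the value column for one meminfo key, mirroring the traced loop.
    lookup_meminfo() {
        local target=$1 mem_file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$target" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                           # the units column lands in _ and is dropped
            return 0
        done < "$mem_file"
        return 1
    }

    # lookup_meminfo HugePages_Total   -> 1536 on this host (512 on node0 + 1024 on node1)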
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.478 00:29:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.478 00:29:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.478 00:29:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85705112 kB' 'MemUsed: 11957572 kB' 'SwapCached: 0 kB' 'Active: 6070072 kB' 'Inactive: 3333524 kB' 'Active(anon): 5914492 kB' 'Inactive(anon): 0 kB' 'Active(file): 155580 kB' 'Inactive(file): 3333524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9149200 kB' 'Mapped: 133704 kB' 'AnonPages: 257608 kB' 'Shmem: 5660096 kB' 'KernelStack: 10696 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128220 kB' 'Slab: 370468 kB' 'SReclaimable: 128220 kB' 'SUnreclaim: 242248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
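The common.sh@22-29 entries just traced show how get_meminfo switches from the global /proc/meminfo to the per-NUMA-node file and strips the "Node <n> " prefix so the same key scan works on both. A standalone sketch of that lookup, assuming the standard sysfs layout (the function name node_meminfo_lines is illustrative):

    shopt -s extglob                        # needed for the +([0-9]) pattern below
    # Print a node's meminfo lines in the same "key: value" shape as /proc/meminfo.
    node_meminfo_lines() {
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        # Per-node files exist under sysfs on NUMA systems; fall back to the global file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node lines are prefixed "Node <n> "; strip it so keys parse uniformly.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    # node_meminfo_lines 0 | grep HugePages_Surp   -> "HugePages_Surp: 0" on this host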
00:04:02.740 00:29:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [xtrace repeats the compare-and-continue for every node1 meminfo key from MemTotal through HugePages_Free]
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:02.741 node0=512 expecting 512
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:02.741 node1=1024 expecting 1024
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:02.741
00:04:02.741 real	0m3.044s
00:04:02.741 user	0m1.205s
00:04:02.741 sys	0m1.907s
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:02.741 00:29:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:02.741 ************************************
00:04:02.741 END TEST custom_alloc
00:04:02.741 ************************************
00:04:02.741 00:29:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:02.741 00:29:14 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:02.741 00:29:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:02.741 00:29:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:02.741 00:29:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.741 ************************************
00:04:02.741 START TEST no_shrink_alloc
00:04:02.741 ************************************
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
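get_test_nr_hugepages turns a size budget into a page count and then hands node placement to get_test_nr_hugepages_per_node, whose body is traced at the start of the next block. A sketch of both steps as the trace implies them; the kB units are an assumption consistent with 'Hugepagesize: 2048 kB' in the dumps (2097152 / 2048 = 1024 pages), and the even-split fallback branch is an assumption not exercised in this log:

    # Sizing step (setup/hugepages.sh@49-57 in the trace).
    get_test_nr_hugepages() {
        local size=$1; shift                 # requested pool size, assumed kB
        local node_ids=("$@")                # optional NUMA nodes, e.g. ('0')
        local default_hugepages=2048         # kB; normally derived from Hugepagesize

        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
        get_test_nr_hugepages_per_node "${node_ids[@]}"
    }

    # Distribution step (setup/hugepages.sh@62-73 in the trace).
    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@") _node
        local -g nodes_test=()
        if (( ${#user_nodes[@]} > 0 )); then
            # Pin the full allocation to each node the caller listed (here: node 0).
            for _node in "${user_nodes[@]}"; do
                nodes_test[_node]=$nr_hugepages
            done
            return 0
        fi
        # Assumed fallback: spread the allocation evenly across all NUMA nodes.
        for (( _node = 0; _node < no_nodes; _node++ )); do
            nodes_test[_node]=$(( nr_hugepages / no_nodes ))
        done
    }

    # get_test_nr_hugepages 2097152 0   -> nr_hugepages=1024, all pinned to node 0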
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.741 00:29:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:05.280 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.280 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:05.280 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:05.544 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:05.544 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:05.544 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:05.544 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:05.544 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.544 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174924496 kB' 'MemAvailable: 177776000 kB' 'Buffers: 3896 kB' 'Cached: 10905432 kB' 'SwapCached: 0 kB' 'Active: 7897232 kB' 'Inactive: 3492896 kB' 'Active(anon): 7509856 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484032 kB' 'Mapped: 207764 kB' 'Shmem: 7029056 kB' 'KReclaimable: 231364 kB' 'Slab: 743432 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512068 kB' 'KernelStack: 20640 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8973228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315032 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:04:05.545 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue  [xtrace repeats the compare-and-continue for each meminfo key from MemTotal onward; log truncated mid-scan at KernelStack]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174924844 kB' 'MemAvailable: 177776348 kB' 'Buffers: 3896 kB' 'Cached: 10905436 kB' 'SwapCached: 0 kB' 'Active: 7896992 kB' 'Inactive: 3492896 kB' 'Active(anon): 7509616 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
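For readers following the trace: the loop condensed above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one 'key: value' line at a time until it finds the requested key. A minimal sketch of the same technique, reconstructed from the trace records (the real SPDK helper may differ in detail; shopt -s extglob is assumed for the Node-prefix strip):

    shopt -s extglob                     # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}         # key to fetch; optional NUMA node
        local var val _
        local mem_f=/proc/meminfo mem
        # A node argument switches to that node's meminfo file when it exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix; strip it
        while IFS=': ' read -r var val _; do
            # Print the matching key's value and stop, e.g. 484032 for AnonPages above
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as anon=$(get_meminfo AnonHugePages) it yields 0 in this run. A one-shot equivalent would be awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo; the pure-bash loop traced here avoids forking a process for every probe.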
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.546 00:29:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174924844 kB' 'MemAvailable: 177776348 kB' 'Buffers: 3896 kB' 'Cached: 10905436 kB' 'SwapCached: 0 kB' 'Active: 7896992 kB' 'Inactive: 3492896 kB' 'Active(anon): 7509616 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483756 kB' 'Mapped: 207764 kB' 'Shmem: 7029060 kB' 'KReclaimable: 231364 kB' 'Slab: 743544 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512180 kB' 'KernelStack: 20464 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8974492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
[trace condensed: the setup/common.sh@31-32 loop again walks every key printed above, continuing past each one from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
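Note the probe at setup/common.sh@23: because these calls pass no node argument, $node is empty and the tested path collapses to the literal /sys/devices/system/node/node/meminfo, which never exists, so every lookup here falls back to the system-wide /proc/meminfo. A node-scoped call would instead read a real per-node file whose lines carry the 'Node N ' prefix that the @29 substitution strips, along the lines of this hedged example (node 0 and an allocated pool are assumed, and 512 is purely illustrative):

    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo
    grep HugePages_Free "$mem_f"    # lines look like: Node 0 HugePages_Free:   512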
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.548 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174924772 kB' 'MemAvailable: 177776276 kB' 'Buffers: 3896 kB' 'Cached: 10905456 kB' 'SwapCached: 0 kB' 'Active: 7896768 kB' 'Inactive: 3492896 kB' 'Active(anon): 7509392 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483480 kB' 'Mapped: 207764 kB' 'Shmem: 7029080 kB' 'KReclaimable: 231364 kB' 'Slab: 743544 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512180 kB' 'KernelStack: 20656 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8974764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
[trace condensed: the setup/common.sh@31-32 loop skips every key printed above, continuing past each one from MemTotal through HugePages_Free until HugePages_Rsvd matches]
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
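At this point the test has harvested anon=0, surp=0 and resv=0; the trace records that follow echo those values and assert the no-shrink invariants at hugepages.sh@107-109. A standalone sketch of that accounting check, using the values observed in this run (1024 is the page count the test configured; the real hugepages.sh may phrase the comparison differently):

    nr_hugepages=1024 surp=0 resv=0 anon=0   # values extracted from /proc/meminfo above
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The pool must still account for exactly the configured pages...
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    # ...and with no surplus or reserved pages in play, nr_hugepages alone must match
    (( 1024 == nr_hugepages )) || exit 1

Both assertions pass here, confirming the allocation neither shrank nor leaked into surplus/reserved pages before the final HugePages_Total probe below.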
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.550 nr_hugepages=1024 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.550 resv_hugepages=0 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.550 surplus_hugepages=0 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.550 anon_hugepages=0 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174924196 kB' 'MemAvailable: 177775700 kB' 'Buffers: 3896 kB' 'Cached: 10905472 kB' 'SwapCached: 0 kB' 'Active: 7896684 kB' 'Inactive: 3492896 kB' 'Active(anon): 7509308 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
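The trace above is the xtrace of setup/common.sh's get_meminfo helper: a snapshot of /proc/meminfo is captured with printf, then re-read line by line with IFS=': ', and every key is skipped with continue until the requested field (here HugePages_Rsvd, value 0) is found and echoed. A minimal standalone sketch of the same idiom, assuming the standard "Key:   value kB" layout of /proc/meminfo (hypothetical helper name, not the actual setup/common.sh source):

  # Scan /proc/meminfo for one key and print its numeric value.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do   # IFS splits "Key:   value kB"
          if [[ $var == "$get" ]]; then
              echo "$val"                    # value only, unit dropped
              return 0
          fi
      done </proc/meminfo
      return 1                               # key not present
  }
  # usage: resv=$(get_meminfo_sketch HugePages_Rsvd)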
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.550 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174924196 kB' 'MemAvailable: 177775700 kB' 'Buffers: 3896 kB' 'Cached: 10905472 kB' 'SwapCached: 0 kB' 'Active: 7896684 kB' 'Inactive: 3492896 kB' 'Active(anon): 7509308 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482900 kB' 'Mapped: 207764 kB' 'Shmem: 7029096 kB' 'KReclaimable: 231364 kB' 'Slab: 743780 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512416 kB' 'KernelStack: 20608 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8973288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:04:05.551 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [every field from MemTotal through Unaccepted is skipped in the same way while scanning for HugePages_Total]
00:04:05.552 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.552 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:05.552 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.812 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.812 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.812 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:05.812 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.812 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
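get_nodes above walks /sys/devices/system/node/node+([0-9]) (an extglob pattern) to record per-node hugepage counts, and the lookup that follows reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped before the same key/value scan is reused. A hedged sketch of that per-node variant (hypothetical helper name; the real logic lives in setup/common.sh):

  shopt -s extglob
  # Per-node lookup: node meminfo lines read "Node 0 HugePages_Surp: 0".
  get_node_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local -a mem
      mapfile -t mem <"/sys/devices/system/node/node${node}/meminfo"
      mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node <n> " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  # usage: surp=$(get_node_meminfo_sketch HugePages_Surp 0)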
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84661092 kB' 'MemUsed: 13001592 kB' 'SwapCached: 0 kB' 'Active: 6070832 kB' 'Inactive: 3333524 kB' 'Active(anon): 5915252 kB' 'Inactive(anon): 0 kB' 'Active(file): 155580 kB' 'Inactive(file): 3333524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9149356 kB' 'Mapped: 133716 kB' 'AnonPages: 258200 kB' 'Shmem: 5660252 kB' 'KernelStack: 10744 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128220 kB' 'Slab: 370420 kB' 'SReclaimable: 128220 kB' 'SUnreclaim: 242200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:05.813 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [each node0 field from MemTotal through HugePages_Free is skipped while scanning for HugePages_Surp]
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.814 00:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
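With NRHUGE=512 and CLEAR_HUGE=no, setup output re-runs scripts/setup.sh, which finds 1024 hugepages already reserved and leaves them alone (the INFO line below). The kernel side of such a request is the standard per-node sysfs knob; a hedged sketch of the idea (illustrative only, not a copy of scripts/setup.sh):

  NRHUGE=512
  node=0
  nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
  current=$(<"$nr")
  if (( current >= NRHUGE )); then
      # Nothing to do: enough pages are already reserved on this node.
      echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$node"
  else
      echo "$NRHUGE" >"$nr"   # requires root
  fi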
00:04:08.349 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 (8086 2021), plus 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:08.349 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
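The @96 test above inspects the transparent-hugepage mode string ("always [madvise] never"; the bracketed token is the active mode). Only when THP is not pinned to [never] does the script go on to read AnonHugePages, as in this hedged sketch (reusing the hypothetical helper sketched earlier):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)   # THP may be in use
  else
      anon=0                                     # THP disabled outright
  fi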
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.613 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174903112 kB' 'MemAvailable: 177754616 kB' 'Buffers: 3896 kB' 'Cached: 10905564 kB' 'SwapCached: 0 kB' 'Active: 7897448 kB' 'Inactive: 3492896 kB' 'Active(anon): 7510072 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484052 kB' 'Mapped: 207780 kB' 'Shmem: 7029188 kB' 'KReclaimable: 231364 kB' 'Slab: 743956 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512592 kB' 'KernelStack: 20384 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8972616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB'
00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [each field from MemTotal through HardwareCorrupted is skipped while scanning for AnonHugePages]
00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.614 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174902932 kB' 'MemAvailable: 177754436 kB' 'Buffers: 3896 kB' 'Cached: 10905564 kB' 'SwapCached: 0 kB' 'Active: 7897376 kB' 'Inactive: 3492896 kB' 'Active(anon): 7510000 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484024 kB' 'Mapped: 207780 kB' 'Shmem: 7029188 kB' 'KReclaimable: 231364 kB' 'Slab: 743948 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512584 kB' 'KernelStack: 20336 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8972636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.615 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
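The scan that follows this snapshot is setup/common.sh's get_meminfo helper walking the captured /proc/meminfo one record at a time: with IFS=': ' each line splits into a key (var) and a value (val), every non-matching key falls through to continue, and the first match echoes its value and returns. A minimal sketch of that helper, reconstructed from the xtrace records above (the NUMA-node handling is inferred from the '[[ -e /sys/devices/system/node/node/meminfo ]]' and '[[ -n '' ]]' records; SPDK's actual setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo <field> [numa-node] -- print the value of one meminfo field
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # assumption: with a node argument, read the per-node file instead
        # (the trace only shows the node-less path, node= empty)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo lines carry a "Node N " prefix; strip it (extglob)
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long 'continue' runs above
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as anon=$(get_meminfo AnonHugePages), it prints the field's value (0 here, with the trailing kB discarded into _), which is why each scan above ends with 'echo 0' followed by 'return 0'.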
[... xtrace condensed: each snapshot field from MemTotal through HugePages_Free is tested with [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits continue ...]
00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.616
00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174903452 kB' 'MemAvailable: 177754956 kB' 'Buffers: 3896 kB' 'Cached: 10905580 kB' 'SwapCached: 0 kB' 'Active: 7896752 kB' 'Inactive: 3492896 kB' 'Active(anon): 7509376 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483344 kB' 'Mapped: 207780 kB' 'Shmem: 7029204 kB' 'KReclaimable: 231364 kB' 'Slab: 744008 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512644 kB' 'KernelStack: 20320 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8972656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.616 00:29:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.616 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.616 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.616 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace condensed: Cached through HugePages_Free are each tested with [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and hit continue ...]
00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.618 nr_hugepages=1024 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.618 resv_hugepages=0 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.618 surplus_hugepages=0 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.618 anon_hugepages=0 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.618 00:29:20
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174907648 kB' 'MemAvailable: 177759152 kB' 'Buffers: 3896 kB' 'Cached: 10905608 kB' 'SwapCached: 0 kB' 'Active: 7897940 kB' 'Inactive: 3492896 kB' 'Active(anon): 7510564 kB' 'Inactive(anon): 0 kB' 'Active(file): 387376 kB' 'Inactive(file): 3492896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484748 kB' 'Mapped: 207780 kB' 'Shmem: 7029232 kB' 'KReclaimable: 231364 kB' 'Slab: 744008 kB' 'SReclaimable: 231364 kB' 'SUnreclaim: 512644 kB' 'KernelStack: 20320 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8972680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 66432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2331604 kB' 'DirectMap2M: 10979328 kB' 'DirectMap1G: 188743680 kB' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
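With anon, surp and resv all read back as 0, hugepages.sh checks the pool arithmetic before this final read of HugePages_Total. A sketch of that accounting step as inferred from the @97-@110 records above (the literal 1024 in the (( ... )) lines is xtrace's already-expanded form of what is presumably a variable holding the expected page count in the source):

    anon=$(get_meminfo AnonHugePages)    # 0 -- no THP counted against the pool
    surp=$(get_meminfo HugePages_Surp)   # 0 -- no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0 -- no outstanding reservations
    (( 1024 == nr_hugepages + surp + resv ))   # requested pages all accounted for
    (( 1024 == nr_hugepages ))                 # pool still at the requested size
    get_meminfo HugePages_Total                # read back: expect 1024 (no shrink)

The scan below is that last get_meminfo call; the snapshot above already shows HugePages_Total: 1024 and HugePages_Free: 1024 (2048 kB pages, hence Hugetlb: 2097152 kB), so the pool has not shrunk.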
00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.618 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.619 00:29:20 
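For reference, the lookup being traced here (system-wide above, per node below) is a plain field scan: open /proc/meminfo, or the node's sysfs meminfo file with its "Node N " prefix stripped, split each line on ': ', and print the value once the key matches. A minimal bash sketch of that idea, written for this note rather than copied from setup/common.sh:

  get_meminfo() {
      # Usage: get_meminfo <field> [node] -- print the value column of one field
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node counters live in sysfs; those lines carry a "Node N " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          line=${line#"Node $node "}
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      return 1
  }

  get_meminfo HugePages_Total    # prints 1024 on this box
  get_meminfo HugePages_Surp 0   # prints 0 for node0, as the trace below shows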
00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.619 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84638428 kB' 'MemUsed: 13024256 kB' 'SwapCached: 0 kB' 'Active: 6071568 kB' 'Inactive: 3333524 kB' 'Active(anon): 5915988 kB' 'Inactive(anon): 0 kB' 'Active(file): 155580 kB' 'Inactive(file): 3333524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9149480 kB' 'Mapped: 133740 kB' 'AnonPages: 258832 kB' 'Shmem: 5660376 kB' 'KernelStack: 10696 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128220 kB' 'Slab: 370476 kB' 'SReclaimable: 128220 kB' 'SUnreclaim: 242256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 again read and skip each node0 field above with 'continue' until HugePages_Surp matches]
00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:08.620 node0=1024 expecting 1024 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:08.620 00:04:08.620 real 0m5.974s 00:04:08.620 user 0m2.390s 00:04:08.620 sys 0m3.721s 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.620 00:29:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.620 ************************************ 00:04:08.620 END TEST no_shrink_alloc 00:04:08.620 ************************************ 00:04:08.620 00:29:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.620 00:29:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.620 00:04:08.620 real 0m22.776s 00:04:08.620 user 0m8.896s 00:04:08.620 sys 0m13.588s 00:04:08.620 00:29:20 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.620 00:29:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.620 ************************************ 00:04:08.620 END TEST hugepages 00:04:08.620 ************************************ 00:04:08.620 00:29:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.620 00:29:20 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:08.620 00:29:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.620 00:29:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.620 00:29:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.879 ************************************ 00:04:08.879 START TEST driver 00:04:08.879 ************************************ 00:04:08.879 00:29:20 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:08.879 * Looking for test storage... 
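Before the driver tests begin, note what the clear_hp teardown above boils down to: write zero into every per-node hugepage pool, then flag the cleanup for later setup.sh runs. A minimal sketch of the same steps (run as root; the sysfs paths are the ones visible in the trace):

  shopt -s nullglob
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          # e.g. .../node0/hugepages/hugepages-2048kB/nr_hugepages
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes   # consumed by later scripts/setup.sh invocations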
00:04:08.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:08.879 00:29:20 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:29:20 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:29:20 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.082 00:29:24 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.082 00:29:24 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.082 00:29:24 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.082 00:29:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.082 ************************************ 00:04:13.082 START TEST guess_driver 00:04:13.082 ************************************ 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.082 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:13.083 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.083 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:13.083
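Condensed, the pick_driver logic just traced is: if IOMMU groups exist (174 on this host) or unsafe no-IOMMU mode is enabled, and vfio_pci resolves through modprobe, answer vfio-pci. A sketch of that decision; the uio_pci_generic fallback is the usual alternative and is an assumption here, since this run never reaches it:

  guess_pci_driver() {
      shopt -s nullglob                    # empty iommu_groups dir -> empty array
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=N
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
              modprobe --show-depends vfio_pci &>/dev/null; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic &>/dev/null; then
          echo uio_pci_generic             # assumed fallback, not exercised above
      else
          echo 'No valid driver found'
          return 1
      fi
  }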
00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.083 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.083 Looking for driver=vfio-pci 00:04:13.083 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.083 00:29:24 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.083 00:29:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.083 00:29:24 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.617
[xtrace condensed: driver.sh@57-@61 re-read the marker line for each device row of the config output (00:04:15.617-00:04:16.815), matching '-> vfio-pci' every time]
00:04:16.815 00:29:28 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.815 00:29:28 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:16.815 00:29:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.815 00:29:28 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.009 00:04:21.009 real 0m7.930s 00:04:21.009 user 0m2.340s 00:04:21.009 sys 0m4.067s 00:04:21.009 00:29:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.009 00:29:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.009 ************************************ 00:04:21.009 END TEST guess_driver 00:04:21.009 ************************************ 00:04:21.009 00:29:32 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:21.009 00:04:21.009 real 0m12.150s 00:04:21.009 user 0m3.580s 00:04:21.009 sys 0m6.245s 00:29:32
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.009 00:29:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.009 ************************************ 00:04:21.009 END TEST driver 00:04:21.009 ************************************ 00:04:21.009 00:29:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:21.009 00:29:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.009 00:29:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.009 00:29:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.009 00:29:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.009 ************************************ 00:04:21.009 START TEST devices 00:04:21.009 ************************************ 00:04:21.009 00:29:32 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:21.009 * Looking for test storage... 00:04:21.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.009 00:29:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:21.009 00:29:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:21.009 00:29:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.009 00:29:32 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:24.300 
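Stripped of xtrace, the device filter above performs two sysfs reads per namespace: queue/zoned must say 'none', and size (reported in 512-byte sectors) times 512 must clear the 3 GiB floor. A standalone sketch of those checks; the glob is simplified from the extglob pattern devices.sh actually uses:

  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in devices.sh@198

  usable_test_disks() {
      local blk zoned sectors bytes
      for blk in /sys/block/nvme*n*; do
          [[ -e $blk/queue/zoned ]] || continue
          zoned=$(<"$blk/queue/zoned")
          [[ $zoned != none ]] && continue   # skip host-aware/host-managed namespaces
          sectors=$(<"$blk/size")            # kernel reports 512-byte sectors
          bytes=$((sectors * 512))           # this run's disk: 1000204886016 bytes
          (( bytes >= min_disk_size )) && echo "${blk##*/} $bytes"
      done
      return 0
  }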
00:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:24.300 No valid GPT data, bailing 00:04:24.300 00:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:24.300 00:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:24.300 00:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:24.300 00:29:35 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.300 00:29:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.300 ************************************ 00:04:24.300 START TEST nvme_mount 00:04:24.300 ************************************ 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.300 00:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:25.239 Creating new GPT entries in memory. 00:04:25.240 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.240 other utilities. 00:04:25.240 00:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.240 00:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.240 00:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.240 00:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.240 00:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:26.620 Creating new GPT entries in memory. 00:04:26.620 The operation has completed successfully. 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1180218 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.620 00:29:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.620 00:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.157 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:29.158 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.418 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.418 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.418 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.418 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.418 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.418 00:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.677 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:29.677 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:29.677 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.677 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:29.677 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.678 00:29:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.214 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.473 00:29:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
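
Condensed, the nvme_mount pass traced above is a partition/format/mount cycle followed by a re-run of setup.sh to prove the busy disk gets skipped. A minimal sketch of that cycle, assuming this run's disk, workspace paths and 1 GiB partition bounds:

  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                            # drop any existing GPT/MBR
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition, serialized per disk
  mkfs.ext4 -qF "${disk}p1"                           # quiet, forced format
  mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                              # dummy file the verify step looks for
  PCI_ALLOWED=0000:5e:00.0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config

The "Active devices: ... so not binding PCI dev" lines in the trace are the expected outcome: setup.sh config must refuse to rebind 0000:5e:00.0 while one of its namespaces is mounted or carries data.
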
00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:35.767 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.767 00:04:35.767 real 0m11.049s 00:04:35.767 user 0m3.267s 00:04:35.767 sys 0m5.622s 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.767 00:29:46 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x
00:04:35.767 ************************************
00:04:35.767 END TEST nvme_mount
00:04:35.767 ************************************
00:04:35.767 00:29:46 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:04:35.767 00:29:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:35.767 00:29:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:35.767 00:29:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:35.767 00:29:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:35.767 ************************************
00:04:35.767 START TEST dm_mount
00:04:35.767 ************************************
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:35.767 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:35.768 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:35.768 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:35.768 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:35.768 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:35.768 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:35.768 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:35.768 00:29:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:36.337 Creating new GPT entries in memory.
00:04:36.337 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:36.337 other utilities.
00:04:36.337 00:29:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:36.337 00:29:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:36.337 00:29:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ?
2048 : part_end + 1 )) 00:04:36.337 00:29:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.337 00:29:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:37.719 Creating new GPT entries in memory. 00:04:37.719 The operation has completed successfully. 00:04:37.719 00:29:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:37.719 00:29:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.719 00:29:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:37.719 00:29:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:37.719 00:29:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:38.658 The operation has completed successfully. 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1184423 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:38.658 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.659 00:29:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- 
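
The dm-2 node created just above is the two 1 GiB partitions stitched into one linear device-mapper target; the mount that follows then treats it like any block device. A hand-written equivalent of what the harness assembles here (the table's sector arithmetic, 2097152 sectors per partition, is inferred from the sgdisk bounds and is an assumption, not lifted from the script):

  # concatenate p1 and p2 into one 2 GiB logical device
  dmsetup create nvme_dm_test <<'EOF'
  0 2097152 linear /dev/nvme0n1p1 0
  2097152 2097152 linear /dev/nvme0n1p2 0
  EOF
  readlink -f /dev/mapper/nvme_dm_test   # resolved to /dev/dm-2 on this host
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
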
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.659 00:29:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:41.270 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:41.529 00:29:52 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.529 00:29:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:44.063 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:44.322 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:44.322 00:04:44.322 real 0m8.910s 00:04:44.322 user 0m2.204s 00:04:44.322 sys 0m3.740s 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.322 00:29:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:44.322 ************************************ 00:04:44.322 END TEST dm_mount 00:04:44.322 ************************************ 00:04:44.322 00:29:55 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0
00:04:44.322 00:29:55 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:44.322 00:29:55 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:44.322 00:29:55 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:44.322 00:29:55 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:44.322 00:29:55 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:44.322 00:29:55 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:44.322 00:29:55 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:44.581 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:44.581 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:44.581 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:44.581 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:44.581 00:29:56 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:44.581 00:29:56 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:44.581 00:29:56 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:44.581 00:29:56 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:44.581 00:29:56 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:44.581 00:29:56 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:44.581 00:29:56 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:44.581
00:04:44.581 real    0m23.674s
00:04:44.581 user    0m6.776s
00:04:44.581 sys     0m11.649s
00:04:44.581 00:29:56 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:44.581 00:29:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:44.581 ************************************
00:04:44.581 END TEST devices
00:04:44.581 ************************************
00:04:44.581 00:29:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:44.581
00:04:44.581 real    1m19.475s
00:04:44.581 user    0m26.451s
00:04:44.581 sys     0m43.875s
00:04:44.581 00:29:56 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:44.581 00:29:56 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:44.581 ************************************
00:04:44.581 END TEST setup.sh
00:04:44.581 ************************************
00:04:44.841 00:29:56 -- common/autotest_common.sh@1142 -- # return 0
00:04:44.841 00:29:56 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:47.377 Hugepages
00:04:47.377 node   hugesize   free /  total
00:04:47.377 node0  1048576kB     0 /      0
00:04:47.377 node0     2048kB  2048 /   2048
00:04:47.377 node1  1048576kB     0 /      0
00:04:47.377 node1     2048kB     0 /      0
00:04:47.377
00:04:47.377 Type   BDF           Vendor Device NUMA  Driver   Device  Block devices
00:04:47.377 I/OAT  0000:00:04.0  8086   2021   0     ioatdma  -       -
00:04:47.377 I/OAT  0000:00:04.1  8086   2021   0     ioatdma  -       -
00:04:47.377 I/OAT  0000:00:04.2  8086   2021   0     ioatdma  -       -
00:04:47.377 I/OAT  0000:00:04.3  8086   2021   0     ioatdma  -       -
00:04:47.377 I/OAT  0000:00:04.4  8086   2021   0     ioatdma  -       -
00:04:47.377 I/OAT  0000:00:04.5  8086   2021   0     ioatdma  -       -
00:04:47.377 I/OAT  0000:00:04.6  8086   2021   0     ioatdma  -       -
00:04:47.377 I/OAT  0000:00:04.7  8086   2021   0     ioatdma  -       -
00:04:47.636 NVMe   0000:5e:00.0  8086   0a54   0     nvme     nvme0   nvme0n1
00:04:47.636 I/OAT  0000:80:04.0  8086   2021   1     ioatdma  -       -
00:04:47.636 I/OAT  0000:80:04.1  8086   2021   1     ioatdma  -       -
00:04:47.636 I/OAT  0000:80:04.2  8086   2021   1     ioatdma  -       -
00:04:47.636 I/OAT  0000:80:04.3  8086   2021   1     ioatdma  -       -
00:04:47.636 I/OAT  0000:80:04.4  8086   2021   1     ioatdma  -       -
00:04:47.636 I/OAT  0000:80:04.5  8086   2021   1     ioatdma  -       -
00:04:47.636 I/OAT  0000:80:04.6  8086   2021   1     ioatdma  -       -
00:04:47.636 I/OAT  0000:80:04.7  8086   2021   1     ioatdma  -       -
00:04:47.636 00:29:59 -- spdk/autotest.sh@130 -- # uname -s
00:04:47.636 00:29:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:47.636 00:29:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:47.636 00:29:59 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:50.926 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:50.926 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:51.196 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:51.455 00:30:02 -- common/autotest_common.sh@1532 -- # sleep 1
00:04:52.391 00:30:03 -- common/autotest_common.sh@1533 -- # bdfs=()
00:04:52.391 00:30:03 -- common/autotest_common.sh@1533 -- # local bdfs
00:04:52.391 00:30:03 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:04:52.391 00:30:03 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:04:52.391 00:30:03 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:52.391 00:30:03 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:52.392 00:30:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:52.392 00:30:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:52.392 00:30:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:52.651 00:30:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:52.651 00:30:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0
00:04:52.651 00:30:03 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:55.187 Waiting for block devices as requested
00:04:55.187 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:04:55.445 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:55.445 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:55.446 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:55.705 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:55.705 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:55.705 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:55.964 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:55.964 0000:00:04.0 (8086 2021):
vfio-pci -> ioatdma 00:04:55.964 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:55.964 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:56.223 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:56.223 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:56.223 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:56.483 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:56.483 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:56.483 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:56.742 00:30:08 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:56.742 00:30:08 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:56.742 00:30:08 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:56.742 00:30:08 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:56.742 00:30:08 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:56.742 00:30:08 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:56.742 00:30:08 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:56.742 00:30:08 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:56.742 00:30:08 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:56.742 00:30:08 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:56.742 00:30:08 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:56.742 00:30:08 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:56.742 00:30:08 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:56.742 00:30:08 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:56.742 00:30:08 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:56.742 00:30:08 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:56.742 00:30:08 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:56.743 00:30:08 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:56.743 00:30:08 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:56.743 00:30:08 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:56.743 00:30:08 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:56.743 00:30:08 -- common/autotest_common.sh@1557 -- # continue 00:04:56.743 00:30:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:56.743 00:30:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.743 00:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:56.743 00:30:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:56.743 00:30:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.743 00:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:56.743 00:30:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.279 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.279 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.537 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:59.538 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.538 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.476 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.476 00:30:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:00.476 00:30:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.476 00:30:11 -- common/autotest_common.sh@10 -- # set +x 00:05:00.476 00:30:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:00.476 00:30:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:00.476 00:30:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:00.476 00:30:11 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:00.476 00:30:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:00.476 00:30:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:00.476 00:30:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:00.476 00:30:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:00.476 00:30:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.476 00:30:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.476 00:30:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:00.735 00:30:12 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:00.735 00:30:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:00.735 00:30:12 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:00.735 00:30:12 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:00.735 00:30:12 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:00.735 00:30:12 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:00.735 00:30:12 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:00.735 00:30:12 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:00.735 00:30:12 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:00.735 00:30:12 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1193734 00:05:00.735 00:30:12 -- common/autotest_common.sh@1598 -- # waitforlisten 1193734 00:05:00.735 00:30:12 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.735 00:30:12 -- common/autotest_common.sh@829 -- # '[' -z 1193734 ']' 00:05:00.735 00:30:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.735 00:30:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.735 00:30:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.735 00:30:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.735 00:30:12 -- common/autotest_common.sh@10 -- # set +x 00:05:00.735 [2024-07-13 00:30:12.130230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
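
Two steps in the surrounding trace are worth spelling out. First, the namespace-revert guard above keys off two fields of nvme id-ctrl; reproduced by hand (the 0x8 mask for the namespace-management OACS bit is the one assumption here):

  ctrl=/dev/nvme0
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # 0xe on this drive
  (( oacs & 0x8 )) && echo "namespace management supported"
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # 0 means nothing to reclaim
  (( unvmcap == 0 )) && echo "namespaces already cover the capacity, revert skipped"

Second, the opal_revert_cleanup that follows is, stripped of harness plumbing, a short JSON-RPC session against the freshly started target; a minimal sketch (the rpc_get_methods poll stands in for the harness's waitforlisten loop, socket defaults to /var/tmp/spdk.sock):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk/build/bin/spdk_tgt" & tgt=$!
  until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  "$spdk/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
  "$spdk/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test \
    || true   # this drive has no Opal support, so the -32602 error below is expected
  kill "$tgt" && wait "$tgt"
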
00:05:00.735 [2024-07-13 00:30:12.130277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193734 ] 00:05:00.735 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.735 [2024-07-13 00:30:12.199034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.735 [2024-07-13 00:30:12.240093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.719 00:30:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.719 00:30:12 -- common/autotest_common.sh@862 -- # return 0 00:05:01.719 00:30:12 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:01.719 00:30:12 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:01.719 00:30:12 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:05.008 nvme0n1 00:05:05.008 00:30:15 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:05.008 [2024-07-13 00:30:16.076093] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:05.008 request: 00:05:05.008 { 00:05:05.008 "nvme_ctrlr_name": "nvme0", 00:05:05.008 "password": "test", 00:05:05.008 "method": "bdev_nvme_opal_revert", 00:05:05.008 "req_id": 1 00:05:05.008 } 00:05:05.008 Got JSON-RPC error response 00:05:05.008 response: 00:05:05.008 { 00:05:05.008 "code": -32602, 00:05:05.008 "message": "Invalid parameters" 00:05:05.008 } 00:05:05.008 00:30:16 -- common/autotest_common.sh@1604 -- # true 00:05:05.008 00:30:16 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:05.008 00:30:16 -- common/autotest_common.sh@1608 -- # killprocess 1193734 00:05:05.008 00:30:16 -- common/autotest_common.sh@948 -- # '[' -z 1193734 ']' 00:05:05.008 00:30:16 -- common/autotest_common.sh@952 -- # kill -0 1193734 00:05:05.008 00:30:16 -- common/autotest_common.sh@953 -- # uname 00:05:05.008 00:30:16 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.008 00:30:16 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1193734 00:05:05.008 00:30:16 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.008 00:30:16 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.008 00:30:16 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1193734' 00:05:05.008 killing process with pid 1193734 00:05:05.008 00:30:16 -- common/autotest_common.sh@967 -- # kill 1193734 00:05:05.008 00:30:16 -- common/autotest_common.sh@972 -- # wait 1193734 00:05:06.387 00:30:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:06.387 00:30:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:06.387 00:30:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:06.387 00:30:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:06.387 00:30:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:06.387 00:30:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.387 00:30:17 -- common/autotest_common.sh@10 -- # set +x 00:05:06.387 00:30:17 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:06.387 00:30:17 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:06.387 00:30:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:05:06.387 00:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.387 00:30:17 -- common/autotest_common.sh@10 -- # set +x 00:05:06.388 ************************************ 00:05:06.388 START TEST env 00:05:06.388 ************************************ 00:05:06.388 00:30:17 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:06.388 * Looking for test storage... 00:05:06.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:06.388 00:30:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.388 00:30:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.388 00:30:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.388 00:30:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.388 ************************************ 00:05:06.388 START TEST env_memory 00:05:06.388 ************************************ 00:05:06.388 00:30:17 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.388 00:05:06.388 00:05:06.388 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.388 http://cunit.sourceforge.net/ 00:05:06.388 00:05:06.388 00:05:06.388 Suite: memory 00:05:06.388 Test: alloc and free memory map ...[2024-07-13 00:30:17.937165] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.648 passed 00:05:06.648 Test: mem map translation ...[2024-07-13 00:30:17.956403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.648 [2024-07-13 00:30:17.956417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.648 [2024-07-13 00:30:17.956454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.648 [2024-07-13 00:30:17.956461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.648 passed 00:05:06.648 Test: mem map registration ...[2024-07-13 00:30:17.993652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:06.648 [2024-07-13 00:30:17.993673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:06.648 passed 00:05:06.648 Test: mem map adjacent registrations ...passed 00:05:06.648 00:05:06.648 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.648 suites 1 1 n/a 0 0 00:05:06.648 tests 4 4 4 0 0 00:05:06.648 asserts 152 152 152 0 n/a 00:05:06.648 00:05:06.648 Elapsed time = 0.137 seconds 00:05:06.648 00:05:06.648 real 0m0.149s 00:05:06.648 user 0m0.139s 00:05:06.648 sys 0m0.010s 00:05:06.648 00:30:18 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.648 00:30:18 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:05:06.648 ************************************ 00:05:06.648 END TEST env_memory 00:05:06.648 ************************************ 00:05:06.648 00:30:18 env -- common/autotest_common.sh@1142 -- # return 0 00:05:06.648 00:30:18 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:06.648 00:30:18 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.648 00:30:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.648 00:30:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.648 ************************************ 00:05:06.648 START TEST env_vtophys 00:05:06.648 ************************************ 00:05:06.648 00:30:18 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:06.648 EAL: lib.eal log level changed from notice to debug 00:05:06.648 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.648 EAL: Detected lcore 1 as core 1 on socket 0 00:05:06.648 EAL: Detected lcore 2 as core 2 on socket 0 00:05:06.648 EAL: Detected lcore 3 as core 3 on socket 0 00:05:06.648 EAL: Detected lcore 4 as core 4 on socket 0 00:05:06.648 EAL: Detected lcore 5 as core 5 on socket 0 00:05:06.648 EAL: Detected lcore 6 as core 6 on socket 0 00:05:06.648 EAL: Detected lcore 7 as core 8 on socket 0 00:05:06.648 EAL: Detected lcore 8 as core 9 on socket 0 00:05:06.648 EAL: Detected lcore 9 as core 10 on socket 0 00:05:06.648 EAL: Detected lcore 10 as core 11 on socket 0 00:05:06.648 EAL: Detected lcore 11 as core 12 on socket 0 00:05:06.648 EAL: Detected lcore 12 as core 13 on socket 0 00:05:06.648 EAL: Detected lcore 13 as core 16 on socket 0 00:05:06.648 EAL: Detected lcore 14 as core 17 on socket 0 00:05:06.648 EAL: Detected lcore 15 as core 18 on socket 0 00:05:06.648 EAL: Detected lcore 16 as core 19 on socket 0 00:05:06.648 EAL: Detected lcore 17 as core 20 on socket 0 00:05:06.648 EAL: Detected lcore 18 as core 21 on socket 0 00:05:06.648 EAL: Detected lcore 19 as core 25 on socket 0 00:05:06.648 EAL: Detected lcore 20 as core 26 on socket 0 00:05:06.648 EAL: Detected lcore 21 as core 27 on socket 0 00:05:06.648 EAL: Detected lcore 22 as core 28 on socket 0 00:05:06.648 EAL: Detected lcore 23 as core 29 on socket 0 00:05:06.648 EAL: Detected lcore 24 as core 0 on socket 1 00:05:06.648 EAL: Detected lcore 25 as core 1 on socket 1 00:05:06.648 EAL: Detected lcore 26 as core 2 on socket 1 00:05:06.648 EAL: Detected lcore 27 as core 3 on socket 1 00:05:06.648 EAL: Detected lcore 28 as core 4 on socket 1 00:05:06.648 EAL: Detected lcore 29 as core 5 on socket 1 00:05:06.648 EAL: Detected lcore 30 as core 6 on socket 1 00:05:06.648 EAL: Detected lcore 31 as core 9 on socket 1 00:05:06.648 EAL: Detected lcore 32 as core 10 on socket 1 00:05:06.648 EAL: Detected lcore 33 as core 11 on socket 1 00:05:06.648 EAL: Detected lcore 34 as core 12 on socket 1 00:05:06.648 EAL: Detected lcore 35 as core 13 on socket 1 00:05:06.648 EAL: Detected lcore 36 as core 16 on socket 1 00:05:06.648 EAL: Detected lcore 37 as core 17 on socket 1 00:05:06.648 EAL: Detected lcore 38 as core 18 on socket 1 00:05:06.648 EAL: Detected lcore 39 as core 19 on socket 1 00:05:06.648 EAL: Detected lcore 40 as core 20 on socket 1 00:05:06.648 EAL: Detected lcore 41 as core 21 on socket 1 00:05:06.648 EAL: Detected lcore 42 as core 24 on socket 1 00:05:06.648 EAL: Detected lcore 43 as core 25 on socket 1 00:05:06.648 EAL: Detected lcore 44 as core 
26 on socket 1 00:05:06.648 EAL: Detected lcore 45 as core 27 on socket 1 00:05:06.648 EAL: Detected lcore 46 as core 28 on socket 1 00:05:06.648 EAL: Detected lcore 47 as core 29 on socket 1 00:05:06.648 EAL: Detected lcore 48 as core 0 on socket 0 00:05:06.648 EAL: Detected lcore 49 as core 1 on socket 0 00:05:06.648 EAL: Detected lcore 50 as core 2 on socket 0 00:05:06.648 EAL: Detected lcore 51 as core 3 on socket 0 00:05:06.648 EAL: Detected lcore 52 as core 4 on socket 0 00:05:06.648 EAL: Detected lcore 53 as core 5 on socket 0 00:05:06.648 EAL: Detected lcore 54 as core 6 on socket 0 00:05:06.648 EAL: Detected lcore 55 as core 8 on socket 0 00:05:06.648 EAL: Detected lcore 56 as core 9 on socket 0 00:05:06.648 EAL: Detected lcore 57 as core 10 on socket 0 00:05:06.648 EAL: Detected lcore 58 as core 11 on socket 0 00:05:06.648 EAL: Detected lcore 59 as core 12 on socket 0 00:05:06.648 EAL: Detected lcore 60 as core 13 on socket 0 00:05:06.648 EAL: Detected lcore 61 as core 16 on socket 0 00:05:06.648 EAL: Detected lcore 62 as core 17 on socket 0 00:05:06.648 EAL: Detected lcore 63 as core 18 on socket 0 00:05:06.648 EAL: Detected lcore 64 as core 19 on socket 0 00:05:06.648 EAL: Detected lcore 65 as core 20 on socket 0 00:05:06.648 EAL: Detected lcore 66 as core 21 on socket 0 00:05:06.648 EAL: Detected lcore 67 as core 25 on socket 0 00:05:06.648 EAL: Detected lcore 68 as core 26 on socket 0 00:05:06.648 EAL: Detected lcore 69 as core 27 on socket 0 00:05:06.648 EAL: Detected lcore 70 as core 28 on socket 0 00:05:06.648 EAL: Detected lcore 71 as core 29 on socket 0 00:05:06.648 EAL: Detected lcore 72 as core 0 on socket 1 00:05:06.648 EAL: Detected lcore 73 as core 1 on socket 1 00:05:06.648 EAL: Detected lcore 74 as core 2 on socket 1 00:05:06.648 EAL: Detected lcore 75 as core 3 on socket 1 00:05:06.648 EAL: Detected lcore 76 as core 4 on socket 1 00:05:06.649 EAL: Detected lcore 77 as core 5 on socket 1 00:05:06.649 EAL: Detected lcore 78 as core 6 on socket 1 00:05:06.649 EAL: Detected lcore 79 as core 9 on socket 1 00:05:06.649 EAL: Detected lcore 80 as core 10 on socket 1 00:05:06.649 EAL: Detected lcore 81 as core 11 on socket 1 00:05:06.649 EAL: Detected lcore 82 as core 12 on socket 1 00:05:06.649 EAL: Detected lcore 83 as core 13 on socket 1 00:05:06.649 EAL: Detected lcore 84 as core 16 on socket 1 00:05:06.649 EAL: Detected lcore 85 as core 17 on socket 1 00:05:06.649 EAL: Detected lcore 86 as core 18 on socket 1 00:05:06.649 EAL: Detected lcore 87 as core 19 on socket 1 00:05:06.649 EAL: Detected lcore 88 as core 20 on socket 1 00:05:06.649 EAL: Detected lcore 89 as core 21 on socket 1 00:05:06.649 EAL: Detected lcore 90 as core 24 on socket 1 00:05:06.649 EAL: Detected lcore 91 as core 25 on socket 1 00:05:06.649 EAL: Detected lcore 92 as core 26 on socket 1 00:05:06.649 EAL: Detected lcore 93 as core 27 on socket 1 00:05:06.649 EAL: Detected lcore 94 as core 28 on socket 1 00:05:06.649 EAL: Detected lcore 95 as core 29 on socket 1 00:05:06.649 EAL: Maximum logical cores by configuration: 128 00:05:06.649 EAL: Detected CPU lcores: 96 00:05:06.649 EAL: Detected NUMA nodes: 2 00:05:06.649 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:06.649 EAL: Detected shared linkage of DPDK 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:06.649 EAL: 
Registered [vdev] bus. 00:05:06.649 EAL: bus.vdev log level changed from disabled to notice 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:06.649 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:06.649 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:06.649 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:06.649 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Bus pci wants IOVA as 'DC' 00:05:06.649 EAL: Bus vdev wants IOVA as 'DC' 00:05:06.649 EAL: Buses did not request a specific IOVA mode. 00:05:06.649 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:06.649 EAL: Selected IOVA mode 'VA' 00:05:06.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.649 EAL: Probing VFIO support... 00:05:06.649 EAL: IOMMU type 1 (Type 1) is supported 00:05:06.649 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:06.649 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:06.649 EAL: VFIO support initialized 00:05:06.649 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.649 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.649 EAL: Setting up physically contiguous memory... 
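Worth noting in the negotiation above: both buses request IOVA as 'DC' (don't care), so the EAL picks 'VA' only because a working IOMMU is detected, then confirms VFIO type 1 support. A rough pre-flight check from the shell (an assumption about the sysfs layout, not part of the harness):

    # If iommu_groups is non-empty, vfio-pci should bind and IOVA=VA is expected.
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU present: expect \"Selected IOVA mode 'VA'\" and \"IOMMU type 1\""
    else
        echo "no IOMMU groups: EAL would likely fall back to IOVA as PA"
    fi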
00:05:06.649 EAL: Setting maximum number of open files to 524288 00:05:06.649 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.649 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:06.649 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.649 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:06.649 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.649 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:06.649 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.649 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.649 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:06.649 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:06.649 EAL: Hugepages will be freed exactly as allocated. 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: TSC frequency is ~2300000 KHz 00:05:06.649 EAL: Main lcore 0 is ready (tid=7fcad6209a00;cpuset=[0]) 00:05:06.649 EAL: Trying to obtain current memory policy. 00:05:06.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.649 EAL: Restoring previous memory policy: 0 00:05:06.649 EAL: request: mp_malloc_sync 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.649 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:05:06.649 EAL: probe driver: 8086:37d2 net_i40e 00:05:06.649 EAL: Not managed by a supported kernel driver, skipped 00:05:06.649 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:05:06.649 EAL: probe driver: 8086:37d2 net_i40e 00:05:06.649 EAL: Not managed by a supported kernel driver, skipped 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.649 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.649 00:05:06.649 00:05:06.649 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.649 http://cunit.sourceforge.net/ 00:05:06.649 00:05:06.649 00:05:06.649 Suite: components_suite 00:05:06.649 Test: vtophys_malloc_test ...passed 00:05:06.649 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.649 EAL: Restoring previous memory policy: 4 00:05:06.649 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.649 EAL: request: mp_malloc_sync 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.649 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.649 EAL: request: mp_malloc_sync 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.649 EAL: Trying to obtain current memory policy. 00:05:06.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.649 EAL: Restoring previous memory policy: 4 00:05:06.649 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.649 EAL: request: mp_malloc_sync 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.649 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.649 EAL: request: mp_malloc_sync 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.649 EAL: Trying to obtain current memory policy. 00:05:06.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.649 EAL: Restoring previous memory policy: 4 00:05:06.649 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.649 EAL: request: mp_malloc_sync 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.649 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.649 EAL: request: mp_malloc_sync 00:05:06.649 EAL: No shared files mode enabled, IPC is disabled 00:05:06.649 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.649 EAL: Trying to obtain current memory policy. 
00:05:06.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.909 EAL: Restoring previous memory policy: 4 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.909 EAL: Trying to obtain current memory policy. 00:05:06.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.909 EAL: Restoring previous memory policy: 4 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.909 EAL: Trying to obtain current memory policy. 00:05:06.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.909 EAL: Restoring previous memory policy: 4 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.909 EAL: Trying to obtain current memory policy. 00:05:06.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.909 EAL: Restoring previous memory policy: 4 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.909 EAL: Trying to obtain current memory policy. 00:05:06.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.909 EAL: Restoring previous memory policy: 4 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.909 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.909 EAL: request: mp_malloc_sync 00:05:06.909 EAL: No shared files mode enabled, IPC is disabled 00:05:06.909 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.909 EAL: Trying to obtain current memory policy. 
00:05:06.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.168 EAL: Restoring previous memory policy: 4 00:05:07.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.168 EAL: request: mp_malloc_sync 00:05:07.168 EAL: No shared files mode enabled, IPC is disabled 00:05:07.168 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.168 EAL: request: mp_malloc_sync 00:05:07.168 EAL: No shared files mode enabled, IPC is disabled 00:05:07.168 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.168 EAL: Trying to obtain current memory policy. 00:05:07.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.427 EAL: Restoring previous memory policy: 4 00:05:07.427 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.427 EAL: request: mp_malloc_sync 00:05:07.427 EAL: No shared files mode enabled, IPC is disabled 00:05:07.427 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.686 EAL: request: mp_malloc_sync 00:05:07.686 EAL: No shared files mode enabled, IPC is disabled 00:05:07.686 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:07.686 passed 00:05:07.686 00:05:07.686 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.686 suites 1 1 n/a 0 0 00:05:07.686 tests 2 2 2 0 0 00:05:07.686 asserts 497 497 497 0 n/a 00:05:07.686 00:05:07.686 Elapsed time = 0.967 seconds 00:05:07.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.686 EAL: request: mp_malloc_sync 00:05:07.686 EAL: No shared files mode enabled, IPC is disabled 00:05:07.686 EAL: Heap on socket 0 was shrunk by 2MB 00:05:07.686 EAL: No shared files mode enabled, IPC is disabled 00:05:07.686 EAL: No shared files mode enabled, IPC is disabled 00:05:07.686 EAL: No shared files mode enabled, IPC is disabled 00:05:07.686 00:05:07.686 real 0m1.091s 00:05:07.686 user 0m0.629s 00:05:07.686 sys 0m0.431s 00:05:07.686 00:30:19 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.686 00:30:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:07.686 ************************************ 00:05:07.686 END TEST env_vtophys 00:05:07.686 ************************************ 00:05:07.686 00:30:19 env -- common/autotest_common.sh@1142 -- # return 0 00:05:07.686 00:30:19 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.686 00:30:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.686 00:30:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.686 00:30:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.945 ************************************ 00:05:07.945 START TEST env_pci 00:05:07.945 ************************************ 00:05:07.945 00:30:19 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.945 00:05:07.945 00:05:07.945 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.945 http://cunit.sourceforge.net/ 00:05:07.945 00:05:07.945 00:05:07.945 Suite: pci 00:05:07.945 Test: pci_hook ...[2024-07-13 00:30:19.286579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1195049 has claimed it 00:05:07.945 EAL: Cannot find device (10000:00:01.0) 00:05:07.945 EAL: Failed to attach device on primary process 00:05:07.945 passed 00:05:07.945 
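Each expanded-by/shrunk-by pair in the vtophys run above is the spdk:(nil) mem event callback registering a freshly faulted hugepage region and then releasing it, which is why the run ends consistent with 'Hugepages will be freed exactly as allocated'. One way to observe this from outside the test (a sketch, assuming a default hugepage setup):

    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # before
    sudo ./test/env/vtophys/vtophys                  # same binary as the run above
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # after: Free returns to its starting value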
00:05:07.945 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.945 suites 1 1 n/a 0 0 00:05:07.945 tests 1 1 1 0 0 00:05:07.945 asserts 25 25 25 0 n/a 00:05:07.945 00:05:07.945 Elapsed time = 0.026 seconds 00:05:07.945 00:05:07.945 real 0m0.044s 00:05:07.945 user 0m0.010s 00:05:07.945 sys 0m0.034s 00:05:07.945 00:30:19 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.945 00:30:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:07.945 ************************************ 00:05:07.945 END TEST env_pci 00:05:07.945 ************************************ 00:05:07.945 00:30:19 env -- common/autotest_common.sh@1142 -- # return 0 00:05:07.945 00:30:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.945 00:30:19 env -- env/env.sh@15 -- # uname 00:05:07.945 00:30:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.945 00:30:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:07.945 00:30:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.945 00:30:19 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:07.945 00:30:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.945 00:30:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.945 ************************************ 00:05:07.945 START TEST env_dpdk_post_init 00:05:07.945 ************************************ 00:05:07.945 00:30:19 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.945 EAL: Detected CPU lcores: 96 00:05:07.945 EAL: Detected NUMA nodes: 2 00:05:07.945 EAL: Detected shared linkage of DPDK 00:05:07.945 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.945 EAL: Selected IOVA mode 'VA' 00:05:07.945 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.945 EAL: VFIO support initialized 00:05:07.945 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.204 EAL: Using IOMMU type 1 (Type 1) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:08.204 EAL: Ignore mapping IO port bar(1) 00:05:08.204 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:09.141 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:09.141 EAL: Ignore mapping IO port bar(1) 00:05:09.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 
00:05:09.141 EAL: Ignore mapping IO port bar(1) 00:05:09.141 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:09.141 EAL: Ignore mapping IO port bar(1) 00:05:09.142 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:09.142 EAL: Ignore mapping IO port bar(1) 00:05:09.142 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:09.142 EAL: Ignore mapping IO port bar(1) 00:05:09.142 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:09.142 EAL: Ignore mapping IO port bar(1) 00:05:09.142 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:09.142 EAL: Ignore mapping IO port bar(1) 00:05:09.142 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:09.142 EAL: Ignore mapping IO port bar(1) 00:05:09.142 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:12.431 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:12.431 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:12.431 Starting DPDK initialization... 00:05:12.431 Starting SPDK post initialization... 00:05:12.431 SPDK NVMe probe 00:05:12.431 Attaching to 0000:5e:00.0 00:05:12.431 Attached to 0000:5e:00.0 00:05:12.431 Cleaning up... 00:05:12.431 00:05:12.431 real 0m4.322s 00:05:12.431 user 0m3.248s 00:05:12.431 sys 0m0.150s 00:05:12.431 00:30:23 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.431 00:30:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.431 ************************************ 00:05:12.431 END TEST env_dpdk_post_init 00:05:12.431 ************************************ 00:05:12.431 00:30:23 env -- common/autotest_common.sh@1142 -- # return 0 00:05:12.431 00:30:23 env -- env/env.sh@26 -- # uname 00:05:12.431 00:30:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.431 00:30:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.431 00:30:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.431 00:30:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.431 00:30:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.431 ************************************ 00:05:12.431 START TEST env_mem_callbacks 00:05:12.431 ************************************ 00:05:12.431 00:30:23 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.431 EAL: Detected CPU lcores: 96 00:05:12.431 EAL: Detected NUMA nodes: 2 00:05:12.431 EAL: Detected shared linkage of DPDK 00:05:12.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.431 EAL: Selected IOVA mode 'VA' 00:05:12.431 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.431 EAL: VFIO support initialized 00:05:12.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.431 00:05:12.431 00:05:12.431 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.431 http://cunit.sourceforge.net/ 00:05:12.431 00:05:12.431 00:05:12.431 Suite: memory 00:05:12.431 Test: test ... 
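env_dpdk_post_init above walks every ioat channel (the 'Ignore mapping IO port bar(1)' lines), attaches the NVMe controller at 0000:5e:00.0, and unmaps it during cleanup. The invocation it traces can be replayed directly:

    # Same flags as the traced run; requires the devices to be bound to vfio-pci.
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000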
00:05:12.431 register 0x200000200000 2097152 00:05:12.431 malloc 3145728 00:05:12.431 register 0x200000400000 4194304 00:05:12.431 buf 0x200000500000 len 3145728 PASSED 00:05:12.431 malloc 64 00:05:12.431 buf 0x2000004fff40 len 64 PASSED 00:05:12.431 malloc 4194304 00:05:12.431 register 0x200000800000 6291456 00:05:12.431 buf 0x200000a00000 len 4194304 PASSED 00:05:12.431 free 0x200000500000 3145728 00:05:12.431 free 0x2000004fff40 64 00:05:12.431 unregister 0x200000400000 4194304 PASSED 00:05:12.431 free 0x200000a00000 4194304 00:05:12.431 unregister 0x200000800000 6291456 PASSED 00:05:12.431 malloc 8388608 00:05:12.432 register 0x200000400000 10485760 00:05:12.432 buf 0x200000600000 len 8388608 PASSED 00:05:12.432 free 0x200000600000 8388608 00:05:12.432 unregister 0x200000400000 10485760 PASSED 00:05:12.432 passed 00:05:12.432 00:05:12.432 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.432 suites 1 1 n/a 0 0 00:05:12.432 tests 1 1 1 0 0 00:05:12.432 asserts 15 15 15 0 n/a 00:05:12.432 00:05:12.432 Elapsed time = 0.007 seconds 00:05:12.432 00:05:12.432 real 0m0.056s 00:05:12.432 user 0m0.024s 00:05:12.432 sys 0m0.032s 00:05:12.432 00:30:23 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.432 00:30:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:12.432 ************************************ 00:05:12.432 END TEST env_mem_callbacks 00:05:12.432 ************************************ 00:05:12.432 00:30:23 env -- common/autotest_common.sh@1142 -- # return 0 00:05:12.432 00:05:12.432 real 0m6.105s 00:05:12.432 user 0m4.235s 00:05:12.432 sys 0m0.944s 00:05:12.432 00:30:23 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.432 00:30:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.432 ************************************ 00:05:12.432 END TEST env 00:05:12.432 ************************************ 00:05:12.432 00:30:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.432 00:30:23 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:12.432 00:30:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.432 00:30:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.432 00:30:23 -- common/autotest_common.sh@10 -- # set +x 00:05:12.432 ************************************ 00:05:12.432 START TEST rpc 00:05:12.432 ************************************ 00:05:12.432 00:30:23 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:12.692 * Looking for test storage... 00:05:12.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.692 00:30:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1195875 00:05:12.692 00:30:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:12.692 00:30:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.692 00:30:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1195875 00:05:12.692 00:30:24 rpc -- common/autotest_common.sh@829 -- # '[' -z 1195875 ']' 00:05:12.692 00:30:24 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.692 00:30:24 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.692 00:30:24 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
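The rpc suite above launches spdk_tgt with bdev tracepoints enabled (-e bdev) and blocks in waitforlisten until the UNIX-domain RPC socket answers. A hedged sketch of that pattern (polling with rpc_get_methods is an assumption; the real helper's mechanics may differ):

    sudo ./build/bin/spdk_tgt -e bdev &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2        # keep retrying until the target is listening
    done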
00:05:12.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.692 00:30:24 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.692 00:30:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.692 [2024-07-13 00:30:24.082434] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:12.692 [2024-07-13 00:30:24.082477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195875 ] 00:05:12.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.692 [2024-07-13 00:30:24.150670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.692 [2024-07-13 00:30:24.190484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.692 [2024-07-13 00:30:24.190523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1195875' to capture a snapshot of events at runtime. 00:05:12.692 [2024-07-13 00:30:24.190529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.692 [2024-07-13 00:30:24.190534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.692 [2024-07-13 00:30:24.190539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1195875 for offline analysis/debug. 00:05:12.692 [2024-07-13 00:30:24.190563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.629 00:30:24 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.629 00:30:24 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:13.629 00:30:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.630 00:30:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.630 00:30:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.630 00:30:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.630 00:30:24 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.630 00:30:24 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.630 00:30:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 ************************************ 00:05:13.630 START TEST rpc_integrity 00:05:13.630 ************************************ 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 00:30:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.630 { 00:05:13.630 "name": "Malloc0", 00:05:13.630 "aliases": [ 00:05:13.630 "4e3aaa2c-2d54-4a62-96d3-69e6a38a64b8" 00:05:13.630 ], 00:05:13.630 "product_name": "Malloc disk", 00:05:13.630 "block_size": 512, 00:05:13.630 "num_blocks": 16384, 00:05:13.630 "uuid": "4e3aaa2c-2d54-4a62-96d3-69e6a38a64b8", 00:05:13.630 "assigned_rate_limits": { 00:05:13.630 "rw_ios_per_sec": 0, 00:05:13.630 "rw_mbytes_per_sec": 0, 00:05:13.630 "r_mbytes_per_sec": 0, 00:05:13.630 "w_mbytes_per_sec": 0 00:05:13.630 }, 00:05:13.630 "claimed": false, 00:05:13.630 "zoned": false, 00:05:13.630 "supported_io_types": { 00:05:13.630 "read": true, 00:05:13.630 "write": true, 00:05:13.630 "unmap": true, 00:05:13.630 "flush": true, 00:05:13.630 "reset": true, 00:05:13.630 "nvme_admin": false, 00:05:13.630 "nvme_io": false, 00:05:13.630 "nvme_io_md": false, 00:05:13.630 "write_zeroes": true, 00:05:13.630 "zcopy": true, 00:05:13.630 "get_zone_info": false, 00:05:13.630 "zone_management": false, 00:05:13.630 "zone_append": false, 00:05:13.630 "compare": false, 00:05:13.630 "compare_and_write": false, 00:05:13.630 "abort": true, 00:05:13.630 "seek_hole": false, 00:05:13.630 "seek_data": false, 00:05:13.630 "copy": true, 00:05:13.630 "nvme_iov_md": false 00:05:13.630 }, 00:05:13.630 "memory_domains": [ 00:05:13.630 { 00:05:13.630 "dma_device_id": "system", 00:05:13.630 "dma_device_type": 1 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.630 "dma_device_type": 2 00:05:13.630 } 00:05:13.630 ], 00:05:13.630 "driver_specific": {} 00:05:13.630 } 00:05:13.630 ]' 00:05:13.630 00:30:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 [2024-07-13 00:30:25.046633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.630 [2024-07-13 00:30:25.046666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.630 [2024-07-13 00:30:25.046679] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c2c60 00:05:13.630 [2024-07-13 00:30:25.046685] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.630 
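rpc_integrity above exercises the full malloc-to-passthru round trip over JSON-RPC. The same sequence can be issued by hand against a running target; the commands below mirror those traced:

    ./scripts/rpc.py bdev_malloc_create 8 512            # 8 MB bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length          # expect 2 while both exist
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0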
[2024-07-13 00:30:25.047751] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.630 [2024-07-13 00:30:25.047771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.630 Passthru0 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.630 { 00:05:13.630 "name": "Malloc0", 00:05:13.630 "aliases": [ 00:05:13.630 "4e3aaa2c-2d54-4a62-96d3-69e6a38a64b8" 00:05:13.630 ], 00:05:13.630 "product_name": "Malloc disk", 00:05:13.630 "block_size": 512, 00:05:13.630 "num_blocks": 16384, 00:05:13.630 "uuid": "4e3aaa2c-2d54-4a62-96d3-69e6a38a64b8", 00:05:13.630 "assigned_rate_limits": { 00:05:13.630 "rw_ios_per_sec": 0, 00:05:13.630 "rw_mbytes_per_sec": 0, 00:05:13.630 "r_mbytes_per_sec": 0, 00:05:13.630 "w_mbytes_per_sec": 0 00:05:13.630 }, 00:05:13.630 "claimed": true, 00:05:13.630 "claim_type": "exclusive_write", 00:05:13.630 "zoned": false, 00:05:13.630 "supported_io_types": { 00:05:13.630 "read": true, 00:05:13.630 "write": true, 00:05:13.630 "unmap": true, 00:05:13.630 "flush": true, 00:05:13.630 "reset": true, 00:05:13.630 "nvme_admin": false, 00:05:13.630 "nvme_io": false, 00:05:13.630 "nvme_io_md": false, 00:05:13.630 "write_zeroes": true, 00:05:13.630 "zcopy": true, 00:05:13.630 "get_zone_info": false, 00:05:13.630 "zone_management": false, 00:05:13.630 "zone_append": false, 00:05:13.630 "compare": false, 00:05:13.630 "compare_and_write": false, 00:05:13.630 "abort": true, 00:05:13.630 "seek_hole": false, 00:05:13.630 "seek_data": false, 00:05:13.630 "copy": true, 00:05:13.630 "nvme_iov_md": false 00:05:13.630 }, 00:05:13.630 "memory_domains": [ 00:05:13.630 { 00:05:13.630 "dma_device_id": "system", 00:05:13.630 "dma_device_type": 1 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.630 "dma_device_type": 2 00:05:13.630 } 00:05:13.630 ], 00:05:13.630 "driver_specific": {} 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "name": "Passthru0", 00:05:13.630 "aliases": [ 00:05:13.630 "1fb26969-0a8d-5d06-bc0f-dd28ee5eaaa1" 00:05:13.630 ], 00:05:13.630 "product_name": "passthru", 00:05:13.630 "block_size": 512, 00:05:13.630 "num_blocks": 16384, 00:05:13.630 "uuid": "1fb26969-0a8d-5d06-bc0f-dd28ee5eaaa1", 00:05:13.630 "assigned_rate_limits": { 00:05:13.630 "rw_ios_per_sec": 0, 00:05:13.630 "rw_mbytes_per_sec": 0, 00:05:13.630 "r_mbytes_per_sec": 0, 00:05:13.630 "w_mbytes_per_sec": 0 00:05:13.630 }, 00:05:13.630 "claimed": false, 00:05:13.630 "zoned": false, 00:05:13.630 "supported_io_types": { 00:05:13.630 "read": true, 00:05:13.630 "write": true, 00:05:13.630 "unmap": true, 00:05:13.630 "flush": true, 00:05:13.630 "reset": true, 00:05:13.630 "nvme_admin": false, 00:05:13.630 "nvme_io": false, 00:05:13.630 "nvme_io_md": false, 00:05:13.630 "write_zeroes": true, 00:05:13.630 "zcopy": true, 00:05:13.630 "get_zone_info": false, 00:05:13.630 "zone_management": false, 00:05:13.630 "zone_append": false, 00:05:13.630 "compare": false, 00:05:13.630 "compare_and_write": false, 00:05:13.630 "abort": true, 00:05:13.630 "seek_hole": false, 
00:05:13.630 "seek_data": false, 00:05:13.630 "copy": true, 00:05:13.630 "nvme_iov_md": false 00:05:13.630 }, 00:05:13.630 "memory_domains": [ 00:05:13.630 { 00:05:13.630 "dma_device_id": "system", 00:05:13.630 "dma_device_type": 1 00:05:13.630 }, 00:05:13.630 { 00:05:13.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.630 "dma_device_type": 2 00:05:13.630 } 00:05:13.630 ], 00:05:13.630 "driver_specific": { 00:05:13.630 "passthru": { 00:05:13.630 "name": "Passthru0", 00:05:13.630 "base_bdev_name": "Malloc0" 00:05:13.630 } 00:05:13.630 } 00:05:13.630 } 00:05:13.630 ]' 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.630 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.630 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.890 00:30:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.890 00:05:13.890 real 0m0.280s 00:05:13.890 user 0m0.186s 00:05:13.890 sys 0m0.033s 00:05:13.890 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.890 00:30:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.890 ************************************ 00:05:13.890 END TEST rpc_integrity 00:05:13.890 ************************************ 00:05:13.890 00:30:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:13.890 00:30:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.890 00:30:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.890 00:30:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.890 00:30:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.890 ************************************ 00:05:13.890 START TEST rpc_plugins 00:05:13.890 ************************************ 00:05:13.890 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:13.890 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.890 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.890 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.890 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.890 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.890 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:13.890 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.890 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.890 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.890 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.890 { 00:05:13.890 "name": "Malloc1", 00:05:13.890 "aliases": [ 00:05:13.890 "3850cbb5-b694-4954-8278-763e75cca3be" 00:05:13.890 ], 00:05:13.890 "product_name": "Malloc disk", 00:05:13.890 "block_size": 4096, 00:05:13.890 "num_blocks": 256, 00:05:13.890 "uuid": "3850cbb5-b694-4954-8278-763e75cca3be", 00:05:13.890 "assigned_rate_limits": { 00:05:13.890 "rw_ios_per_sec": 0, 00:05:13.890 "rw_mbytes_per_sec": 0, 00:05:13.890 "r_mbytes_per_sec": 0, 00:05:13.891 "w_mbytes_per_sec": 0 00:05:13.891 }, 00:05:13.891 "claimed": false, 00:05:13.891 "zoned": false, 00:05:13.891 "supported_io_types": { 00:05:13.891 "read": true, 00:05:13.891 "write": true, 00:05:13.891 "unmap": true, 00:05:13.891 "flush": true, 00:05:13.891 "reset": true, 00:05:13.891 "nvme_admin": false, 00:05:13.891 "nvme_io": false, 00:05:13.891 "nvme_io_md": false, 00:05:13.891 "write_zeroes": true, 00:05:13.891 "zcopy": true, 00:05:13.891 "get_zone_info": false, 00:05:13.891 "zone_management": false, 00:05:13.891 "zone_append": false, 00:05:13.891 "compare": false, 00:05:13.891 "compare_and_write": false, 00:05:13.891 "abort": true, 00:05:13.891 "seek_hole": false, 00:05:13.891 "seek_data": false, 00:05:13.891 "copy": true, 00:05:13.891 "nvme_iov_md": false 00:05:13.891 }, 00:05:13.891 "memory_domains": [ 00:05:13.891 { 00:05:13.891 "dma_device_id": "system", 00:05:13.891 "dma_device_type": 1 00:05:13.891 }, 00:05:13.891 { 00:05:13.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.891 "dma_device_type": 2 00:05:13.891 } 00:05:13.891 ], 00:05:13.891 "driver_specific": {} 00:05:13.891 } 00:05:13.891 ]' 00:05:13.891 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.891 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.891 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.891 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.891 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.891 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.891 00:30:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.891 00:05:13.891 real 0m0.138s 00:05:13.891 user 0m0.087s 00:05:13.891 sys 0m0.019s 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.891 00:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.891 ************************************ 00:05:13.891 END TEST rpc_plugins 00:05:13.891 ************************************ 00:05:13.891 00:30:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:13.891 00:30:25 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.891 00:30:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.891 00:30:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.891 00:30:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.150 ************************************ 00:05:14.150 START TEST rpc_trace_cmd_test 00:05:14.150 ************************************ 00:05:14.150 00:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:14.150 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:14.150 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:14.150 00:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.150 00:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.150 00:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.150 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:14.150 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1195875", 00:05:14.150 "tpoint_group_mask": "0x8", 00:05:14.150 "iscsi_conn": { 00:05:14.150 "mask": "0x2", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "scsi": { 00:05:14.150 "mask": "0x4", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "bdev": { 00:05:14.150 "mask": "0x8", 00:05:14.150 "tpoint_mask": "0xffffffffffffffff" 00:05:14.150 }, 00:05:14.150 "nvmf_rdma": { 00:05:14.150 "mask": "0x10", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "nvmf_tcp": { 00:05:14.150 "mask": "0x20", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "ftl": { 00:05:14.150 "mask": "0x40", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "blobfs": { 00:05:14.150 "mask": "0x80", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "dsa": { 00:05:14.150 "mask": "0x200", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "thread": { 00:05:14.150 "mask": "0x400", 00:05:14.150 "tpoint_mask": "0x0" 00:05:14.150 }, 00:05:14.150 "nvme_pcie": { 00:05:14.150 "mask": "0x800", 00:05:14.151 "tpoint_mask": "0x0" 00:05:14.151 }, 00:05:14.151 "iaa": { 00:05:14.151 "mask": "0x1000", 00:05:14.151 "tpoint_mask": "0x0" 00:05:14.151 }, 00:05:14.151 "nvme_tcp": { 00:05:14.151 "mask": "0x2000", 00:05:14.151 "tpoint_mask": "0x0" 00:05:14.151 }, 00:05:14.151 "bdev_nvme": { 00:05:14.151 "mask": "0x4000", 00:05:14.151 "tpoint_mask": "0x0" 00:05:14.151 }, 00:05:14.151 "sock": { 00:05:14.151 "mask": "0x8000", 00:05:14.151 "tpoint_mask": "0x0" 00:05:14.151 } 00:05:14.151 }' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
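The trace_get_info dump above shows the target's tracepoint state: the shared-memory trace file under /dev/shm, the enabled tracepoint group mask (0x8, the bdev group), and a per-group tpoint_mask that the jq checks then validate (bdev fully enabled at 0xffffffffffffffff, every other group at 0x0). A minimal sketch of inspecting and enabling tracepoints by hand through scripts/rpc.py; trace_get_info is the call used above, while trace_enable_tpoint_group and trace_get_tpoint_group_mask are standard companion RPCs assumed to be available in this SPDK build:

  scripts/rpc.py trace_get_info                  # dump shm path, group mask, per-group tpoint masks
  scripts/rpc.py trace_enable_tpoint_group bdev  # set every tracepoint bit in the bdev group
  scripts/rpc.py trace_get_tpoint_group_mask     # confirm the group mask now includes 0x8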
00:05:14.151 00:05:14.151 real 0m0.214s 00:05:14.151 user 0m0.179s 00:05:14.151 sys 0m0.027s 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.151 00:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.151 ************************************ 00:05:14.151 END TEST rpc_trace_cmd_test 00:05:14.151 ************************************ 00:05:14.151 00:30:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:14.151 00:30:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:14.151 00:30:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.151 00:30:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.151 00:30:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.151 00:30:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.151 00:30:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 ************************************ 00:05:14.410 START TEST rpc_daemon_integrity 00:05:14.410 ************************************ 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.410 { 00:05:14.410 "name": "Malloc2", 00:05:14.410 "aliases": [ 00:05:14.410 "0e289c2e-0a30-45c4-a08e-c4081e0e8f0b" 00:05:14.410 ], 00:05:14.410 "product_name": "Malloc disk", 00:05:14.410 "block_size": 512, 00:05:14.410 "num_blocks": 16384, 00:05:14.410 "uuid": "0e289c2e-0a30-45c4-a08e-c4081e0e8f0b", 00:05:14.410 "assigned_rate_limits": { 00:05:14.410 "rw_ios_per_sec": 0, 00:05:14.410 "rw_mbytes_per_sec": 0, 00:05:14.410 "r_mbytes_per_sec": 0, 00:05:14.410 "w_mbytes_per_sec": 0 00:05:14.410 }, 00:05:14.410 "claimed": false, 00:05:14.410 "zoned": false, 00:05:14.410 "supported_io_types": { 00:05:14.410 "read": true, 00:05:14.410 "write": true, 00:05:14.410 "unmap": true, 00:05:14.410 "flush": true, 00:05:14.410 "reset": true, 00:05:14.410 "nvme_admin": false, 00:05:14.410 "nvme_io": false, 
00:05:14.410 "nvme_io_md": false, 00:05:14.410 "write_zeroes": true, 00:05:14.410 "zcopy": true, 00:05:14.410 "get_zone_info": false, 00:05:14.410 "zone_management": false, 00:05:14.410 "zone_append": false, 00:05:14.410 "compare": false, 00:05:14.410 "compare_and_write": false, 00:05:14.410 "abort": true, 00:05:14.410 "seek_hole": false, 00:05:14.410 "seek_data": false, 00:05:14.410 "copy": true, 00:05:14.410 "nvme_iov_md": false 00:05:14.410 }, 00:05:14.410 "memory_domains": [ 00:05:14.410 { 00:05:14.410 "dma_device_id": "system", 00:05:14.410 "dma_device_type": 1 00:05:14.410 }, 00:05:14.410 { 00:05:14.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.410 "dma_device_type": 2 00:05:14.410 } 00:05:14.410 ], 00:05:14.410 "driver_specific": {} 00:05:14.410 } 00:05:14.410 ]' 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 [2024-07-13 00:30:25.876901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:14.410 [2024-07-13 00:30:25.876930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.410 [2024-07-13 00:30:25.876942] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1674470 00:05:14.410 [2024-07-13 00:30:25.876948] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.410 [2024-07-13 00:30:25.877899] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.410 [2024-07-13 00:30:25.877918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.410 Passthru0 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.410 { 00:05:14.410 "name": "Malloc2", 00:05:14.410 "aliases": [ 00:05:14.410 "0e289c2e-0a30-45c4-a08e-c4081e0e8f0b" 00:05:14.410 ], 00:05:14.410 "product_name": "Malloc disk", 00:05:14.410 "block_size": 512, 00:05:14.410 "num_blocks": 16384, 00:05:14.410 "uuid": "0e289c2e-0a30-45c4-a08e-c4081e0e8f0b", 00:05:14.410 "assigned_rate_limits": { 00:05:14.410 "rw_ios_per_sec": 0, 00:05:14.410 "rw_mbytes_per_sec": 0, 00:05:14.410 "r_mbytes_per_sec": 0, 00:05:14.410 "w_mbytes_per_sec": 0 00:05:14.410 }, 00:05:14.410 "claimed": true, 00:05:14.410 "claim_type": "exclusive_write", 00:05:14.410 "zoned": false, 00:05:14.410 "supported_io_types": { 00:05:14.410 "read": true, 00:05:14.410 "write": true, 00:05:14.410 "unmap": true, 00:05:14.410 "flush": true, 00:05:14.410 "reset": true, 00:05:14.410 "nvme_admin": false, 00:05:14.410 "nvme_io": false, 00:05:14.410 "nvme_io_md": false, 00:05:14.410 "write_zeroes": true, 00:05:14.410 "zcopy": true, 00:05:14.410 "get_zone_info": 
false, 00:05:14.410 "zone_management": false, 00:05:14.410 "zone_append": false, 00:05:14.410 "compare": false, 00:05:14.410 "compare_and_write": false, 00:05:14.410 "abort": true, 00:05:14.410 "seek_hole": false, 00:05:14.410 "seek_data": false, 00:05:14.410 "copy": true, 00:05:14.410 "nvme_iov_md": false 00:05:14.410 }, 00:05:14.410 "memory_domains": [ 00:05:14.410 { 00:05:14.410 "dma_device_id": "system", 00:05:14.410 "dma_device_type": 1 00:05:14.410 }, 00:05:14.410 { 00:05:14.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.410 "dma_device_type": 2 00:05:14.410 } 00:05:14.410 ], 00:05:14.410 "driver_specific": {} 00:05:14.410 }, 00:05:14.410 { 00:05:14.410 "name": "Passthru0", 00:05:14.410 "aliases": [ 00:05:14.410 "2f87d466-7434-5efa-9ff1-a037938ed162" 00:05:14.410 ], 00:05:14.410 "product_name": "passthru", 00:05:14.410 "block_size": 512, 00:05:14.410 "num_blocks": 16384, 00:05:14.410 "uuid": "2f87d466-7434-5efa-9ff1-a037938ed162", 00:05:14.410 "assigned_rate_limits": { 00:05:14.410 "rw_ios_per_sec": 0, 00:05:14.410 "rw_mbytes_per_sec": 0, 00:05:14.410 "r_mbytes_per_sec": 0, 00:05:14.410 "w_mbytes_per_sec": 0 00:05:14.410 }, 00:05:14.410 "claimed": false, 00:05:14.410 "zoned": false, 00:05:14.410 "supported_io_types": { 00:05:14.410 "read": true, 00:05:14.410 "write": true, 00:05:14.410 "unmap": true, 00:05:14.410 "flush": true, 00:05:14.410 "reset": true, 00:05:14.410 "nvme_admin": false, 00:05:14.410 "nvme_io": false, 00:05:14.410 "nvme_io_md": false, 00:05:14.410 "write_zeroes": true, 00:05:14.410 "zcopy": true, 00:05:14.410 "get_zone_info": false, 00:05:14.410 "zone_management": false, 00:05:14.410 "zone_append": false, 00:05:14.410 "compare": false, 00:05:14.410 "compare_and_write": false, 00:05:14.410 "abort": true, 00:05:14.410 "seek_hole": false, 00:05:14.410 "seek_data": false, 00:05:14.410 "copy": true, 00:05:14.410 "nvme_iov_md": false 00:05:14.410 }, 00:05:14.410 "memory_domains": [ 00:05:14.410 { 00:05:14.410 "dma_device_id": "system", 00:05:14.410 "dma_device_type": 1 00:05:14.410 }, 00:05:14.410 { 00:05:14.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.410 "dma_device_type": 2 00:05:14.410 } 00:05:14.410 ], 00:05:14.410 "driver_specific": { 00:05:14.410 "passthru": { 00:05:14.410 "name": "Passthru0", 00:05:14.410 "base_bdev_name": "Malloc2" 00:05:14.410 } 00:05:14.410 } 00:05:14.410 } 00:05:14.410 ]' 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.410 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.669 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.669 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.669 00:30:25 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.669 00:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.669 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.669 00:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.669 00:30:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.669 00:05:14.669 real 0m0.280s 00:05:14.669 user 0m0.180s 00:05:14.669 sys 0m0.039s 00:05:14.669 00:30:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.669 00:30:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.669 ************************************ 00:05:14.669 END TEST rpc_daemon_integrity 00:05:14.669 ************************************ 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:14.669 00:30:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:14.669 00:30:26 rpc -- rpc/rpc.sh@84 -- # killprocess 1195875 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@948 -- # '[' -z 1195875 ']' 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@952 -- # kill -0 1195875 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195875 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195875' 00:05:14.669 killing process with pid 1195875 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@967 -- # kill 1195875 00:05:14.669 00:30:26 rpc -- common/autotest_common.sh@972 -- # wait 1195875 00:05:14.927 00:05:14.927 real 0m2.455s 00:05:14.927 user 0m3.186s 00:05:14.927 sys 0m0.670s 00:05:14.927 00:30:26 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.927 00:30:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.927 ************************************ 00:05:14.927 END TEST rpc 00:05:14.927 ************************************ 00:05:14.927 00:30:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.927 00:30:26 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.927 00:30:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.927 00:30:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.927 00:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.927 ************************************ 00:05:14.927 START TEST skip_rpc 00:05:14.927 ************************************ 00:05:14.927 00:30:26 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:15.185 * Looking for test storage... 
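Before the skip_rpc suite continues below, note what the rpc_daemon_integrity run above exercised: a full bdev lifecycle over the RPC socket — create a malloc disk, claim it with a passthru vbdev (the JSON flips to "claimed": true with "claim_type": "exclusive_write"), then tear both down and confirm bdev_get_bdevs returns an empty list. A minimal sketch of the same sequence, assuming the rpc_cmd wrapper resolves to scripts/rpc.py against the default /var/tmp/spdk.sock:

  scripts/rpc.py bdev_malloc_create 8 512                    # 8 MiB, 512-byte blocks -> 16384 blocks, e.g. Malloc2
  scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0  # claims Malloc2 exclusively
  scripts/rpc.py bdev_get_bdevs                              # lists Malloc2 (claimed) and Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc2
  scripts/rpc.py bdev_get_bdevs                              # back to []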
00:05:15.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.185 00:30:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.185 00:30:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:15.185 00:30:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:15.185 00:30:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.185 00:30:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.185 00:30:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.185 ************************************ 00:05:15.185 START TEST skip_rpc 00:05:15.185 ************************************ 00:05:15.185 00:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:15.185 00:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1196510 00:05:15.185 00:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:15.185 00:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.185 00:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:15.185 [2024-07-13 00:30:26.639421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:15.185 [2024-07-13 00:30:26.639457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196510 ] 00:05:15.185 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.185 [2024-07-13 00:30:26.691422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.185 [2024-07-13 00:30:26.731771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1196510 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1196510 ']' 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1196510 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1196510 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1196510' 00:05:20.530 killing process with pid 1196510 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1196510 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1196510 00:05:20.530 00:05:20.530 real 0m5.364s 00:05:20.530 user 0m5.148s 00:05:20.530 sys 0m0.248s 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.530 00:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.530 ************************************ 00:05:20.530 END TEST skip_rpc 00:05:20.530 ************************************ 00:05:20.530 00:30:31 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:20.530 00:30:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:20.530 00:30:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.530 00:30:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.530 00:30:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.530 ************************************ 00:05:20.530 START TEST skip_rpc_with_json 00:05:20.530 ************************************ 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1197456 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1197456 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1197456 ']' 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.530 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.530 [2024-07-13 00:30:32.079026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:20.530 [2024-07-13 00:30:32.079064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197456 ] 00:05:20.789 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.789 [2024-07-13 00:30:32.143928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.789 [2024-07-13 00:30:32.184910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.049 [2024-07-13 00:30:32.372237] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:21.049 request: 00:05:21.049 { 00:05:21.049 "trtype": "tcp", 00:05:21.049 "method": "nvmf_get_transports", 00:05:21.049 "req_id": 1 00:05:21.049 } 00:05:21.049 Got JSON-RPC error response 00:05:21.049 response: 00:05:21.049 { 00:05:21.049 "code": -19, 00:05:21.049 "message": "No such device" 00:05:21.049 } 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.049 [2024-07-13 00:30:32.380339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.049 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.049 { 00:05:21.049 "subsystems": [ 00:05:21.049 { 00:05:21.049 "subsystem": "vfio_user_target", 00:05:21.049 "config": null 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "subsystem": "keyring", 00:05:21.049 "config": [] 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "subsystem": "iobuf", 00:05:21.049 "config": [ 00:05:21.049 { 00:05:21.049 "method": "iobuf_set_options", 00:05:21.049 "params": { 00:05:21.049 "small_pool_count": 8192, 00:05:21.049 "large_pool_count": 1024, 00:05:21.049 "small_bufsize": 8192, 00:05:21.049 "large_bufsize": 
135168 00:05:21.049 } 00:05:21.049 } 00:05:21.049 ] 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "subsystem": "sock", 00:05:21.049 "config": [ 00:05:21.049 { 00:05:21.049 "method": "sock_set_default_impl", 00:05:21.049 "params": { 00:05:21.049 "impl_name": "posix" 00:05:21.049 } 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "method": "sock_impl_set_options", 00:05:21.049 "params": { 00:05:21.049 "impl_name": "ssl", 00:05:21.049 "recv_buf_size": 4096, 00:05:21.049 "send_buf_size": 4096, 00:05:21.049 "enable_recv_pipe": true, 00:05:21.049 "enable_quickack": false, 00:05:21.049 "enable_placement_id": 0, 00:05:21.049 "enable_zerocopy_send_server": true, 00:05:21.049 "enable_zerocopy_send_client": false, 00:05:21.049 "zerocopy_threshold": 0, 00:05:21.049 "tls_version": 0, 00:05:21.049 "enable_ktls": false 00:05:21.049 } 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "method": "sock_impl_set_options", 00:05:21.049 "params": { 00:05:21.049 "impl_name": "posix", 00:05:21.049 "recv_buf_size": 2097152, 00:05:21.049 "send_buf_size": 2097152, 00:05:21.049 "enable_recv_pipe": true, 00:05:21.049 "enable_quickack": false, 00:05:21.049 "enable_placement_id": 0, 00:05:21.049 "enable_zerocopy_send_server": true, 00:05:21.049 "enable_zerocopy_send_client": false, 00:05:21.049 "zerocopy_threshold": 0, 00:05:21.049 "tls_version": 0, 00:05:21.049 "enable_ktls": false 00:05:21.049 } 00:05:21.049 } 00:05:21.049 ] 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "subsystem": "vmd", 00:05:21.049 "config": [] 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "subsystem": "accel", 00:05:21.049 "config": [ 00:05:21.049 { 00:05:21.049 "method": "accel_set_options", 00:05:21.049 "params": { 00:05:21.049 "small_cache_size": 128, 00:05:21.049 "large_cache_size": 16, 00:05:21.049 "task_count": 2048, 00:05:21.049 "sequence_count": 2048, 00:05:21.049 "buf_count": 2048 00:05:21.049 } 00:05:21.049 } 00:05:21.049 ] 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "subsystem": "bdev", 00:05:21.049 "config": [ 00:05:21.049 { 00:05:21.049 "method": "bdev_set_options", 00:05:21.049 "params": { 00:05:21.049 "bdev_io_pool_size": 65535, 00:05:21.049 "bdev_io_cache_size": 256, 00:05:21.049 "bdev_auto_examine": true, 00:05:21.049 "iobuf_small_cache_size": 128, 00:05:21.049 "iobuf_large_cache_size": 16 00:05:21.049 } 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "method": "bdev_raid_set_options", 00:05:21.049 "params": { 00:05:21.049 "process_window_size_kb": 1024 00:05:21.049 } 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "method": "bdev_iscsi_set_options", 00:05:21.049 "params": { 00:05:21.049 "timeout_sec": 30 00:05:21.049 } 00:05:21.049 }, 00:05:21.049 { 00:05:21.049 "method": "bdev_nvme_set_options", 00:05:21.049 "params": { 00:05:21.049 "action_on_timeout": "none", 00:05:21.049 "timeout_us": 0, 00:05:21.049 "timeout_admin_us": 0, 00:05:21.049 "keep_alive_timeout_ms": 10000, 00:05:21.049 "arbitration_burst": 0, 00:05:21.049 "low_priority_weight": 0, 00:05:21.049 "medium_priority_weight": 0, 00:05:21.049 "high_priority_weight": 0, 00:05:21.049 "nvme_adminq_poll_period_us": 10000, 00:05:21.049 "nvme_ioq_poll_period_us": 0, 00:05:21.049 "io_queue_requests": 0, 00:05:21.049 "delay_cmd_submit": true, 00:05:21.049 "transport_retry_count": 4, 00:05:21.049 "bdev_retry_count": 3, 00:05:21.049 "transport_ack_timeout": 0, 00:05:21.049 "ctrlr_loss_timeout_sec": 0, 00:05:21.049 "reconnect_delay_sec": 0, 00:05:21.049 "fast_io_fail_timeout_sec": 0, 00:05:21.049 "disable_auto_failback": false, 00:05:21.049 "generate_uuids": false, 00:05:21.049 "transport_tos": 0, 
00:05:21.049 "nvme_error_stat": false, 00:05:21.049 "rdma_srq_size": 0, 00:05:21.049 "io_path_stat": false, 00:05:21.049 "allow_accel_sequence": false, 00:05:21.049 "rdma_max_cq_size": 0, 00:05:21.049 "rdma_cm_event_timeout_ms": 0, 00:05:21.049 "dhchap_digests": [ 00:05:21.049 "sha256", 00:05:21.049 "sha384", 00:05:21.049 "sha512" 00:05:21.049 ], 00:05:21.049 "dhchap_dhgroups": [ 00:05:21.049 "null", 00:05:21.049 "ffdhe2048", 00:05:21.049 "ffdhe3072", 00:05:21.049 "ffdhe4096", 00:05:21.049 "ffdhe6144", 00:05:21.049 "ffdhe8192" 00:05:21.050 ] 00:05:21.050 } 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "method": "bdev_nvme_set_hotplug", 00:05:21.050 "params": { 00:05:21.050 "period_us": 100000, 00:05:21.050 "enable": false 00:05:21.050 } 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "method": "bdev_wait_for_examine" 00:05:21.050 } 00:05:21.050 ] 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "scsi", 00:05:21.050 "config": null 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "scheduler", 00:05:21.050 "config": [ 00:05:21.050 { 00:05:21.050 "method": "framework_set_scheduler", 00:05:21.050 "params": { 00:05:21.050 "name": "static" 00:05:21.050 } 00:05:21.050 } 00:05:21.050 ] 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "vhost_scsi", 00:05:21.050 "config": [] 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "vhost_blk", 00:05:21.050 "config": [] 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "ublk", 00:05:21.050 "config": [] 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "nbd", 00:05:21.050 "config": [] 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "nvmf", 00:05:21.050 "config": [ 00:05:21.050 { 00:05:21.050 "method": "nvmf_set_config", 00:05:21.050 "params": { 00:05:21.050 "discovery_filter": "match_any", 00:05:21.050 "admin_cmd_passthru": { 00:05:21.050 "identify_ctrlr": false 00:05:21.050 } 00:05:21.050 } 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "method": "nvmf_set_max_subsystems", 00:05:21.050 "params": { 00:05:21.050 "max_subsystems": 1024 00:05:21.050 } 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "method": "nvmf_set_crdt", 00:05:21.050 "params": { 00:05:21.050 "crdt1": 0, 00:05:21.050 "crdt2": 0, 00:05:21.050 "crdt3": 0 00:05:21.050 } 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "method": "nvmf_create_transport", 00:05:21.050 "params": { 00:05:21.050 "trtype": "TCP", 00:05:21.050 "max_queue_depth": 128, 00:05:21.050 "max_io_qpairs_per_ctrlr": 127, 00:05:21.050 "in_capsule_data_size": 4096, 00:05:21.050 "max_io_size": 131072, 00:05:21.050 "io_unit_size": 131072, 00:05:21.050 "max_aq_depth": 128, 00:05:21.050 "num_shared_buffers": 511, 00:05:21.050 "buf_cache_size": 4294967295, 00:05:21.050 "dif_insert_or_strip": false, 00:05:21.050 "zcopy": false, 00:05:21.050 "c2h_success": true, 00:05:21.050 "sock_priority": 0, 00:05:21.050 "abort_timeout_sec": 1, 00:05:21.050 "ack_timeout": 0, 00:05:21.050 "data_wr_pool_size": 0 00:05:21.050 } 00:05:21.050 } 00:05:21.050 ] 00:05:21.050 }, 00:05:21.050 { 00:05:21.050 "subsystem": "iscsi", 00:05:21.050 "config": [ 00:05:21.050 { 00:05:21.050 "method": "iscsi_set_options", 00:05:21.050 "params": { 00:05:21.050 "node_base": "iqn.2016-06.io.spdk", 00:05:21.050 "max_sessions": 128, 00:05:21.050 "max_connections_per_session": 2, 00:05:21.050 "max_queue_depth": 64, 00:05:21.050 "default_time2wait": 2, 00:05:21.050 "default_time2retain": 20, 00:05:21.050 "first_burst_length": 8192, 00:05:21.050 "immediate_data": true, 00:05:21.050 "allow_duplicated_isid": false, 00:05:21.050 
"error_recovery_level": 0, 00:05:21.050 "nop_timeout": 60, 00:05:21.050 "nop_in_interval": 30, 00:05:21.050 "disable_chap": false, 00:05:21.050 "require_chap": false, 00:05:21.050 "mutual_chap": false, 00:05:21.050 "chap_group": 0, 00:05:21.050 "max_large_datain_per_connection": 64, 00:05:21.050 "max_r2t_per_connection": 4, 00:05:21.050 "pdu_pool_size": 36864, 00:05:21.050 "immediate_data_pool_size": 16384, 00:05:21.050 "data_out_pool_size": 2048 00:05:21.050 } 00:05:21.050 } 00:05:21.050 ] 00:05:21.050 } 00:05:21.050 ] 00:05:21.050 } 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1197456 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1197456 ']' 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1197456 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1197456 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1197456' 00:05:21.050 killing process with pid 1197456 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1197456 00:05:21.050 00:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1197456 00:05:21.618 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1197686 00:05:21.618 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.618 00:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1197686 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1197686 ']' 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1197686 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1197686 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1197686' 00:05:26.892 killing process with pid 1197686 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1197686 00:05:26.892 00:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1197686 
00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.892 00:05:26.892 real 0m6.214s 00:05:26.892 user 0m5.887s 00:05:26.892 sys 0m0.584s 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.892 ************************************ 00:05:26.892 END TEST skip_rpc_with_json 00:05:26.892 ************************************ 00:05:26.892 00:30:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.892 00:30:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.892 00:30:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.892 00:30:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.892 00:30:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.892 ************************************ 00:05:26.892 START TEST skip_rpc_with_delay 00:05:26.892 ************************************ 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.892 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.893 [2024-07-13 00:30:38.366861] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
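The *ERROR* above is skip_rpc_with_delay's expected outcome: --wait-for-rpc tells the app to pause before subsystem initialization and wait for an RPC to release it, which is contradictory when --no-rpc-server disables the RPC server entirely, so spdk_app_start refuses to continue. A minimal sketch of the valid pairing, assuming the default RPC socket and the standard framework_start_init RPC:

  build/bin/spdk_tgt --wait-for-rpc &    # boots, then holds before subsystem init
  scripts/rpc.py framework_start_init    # releases the target to finish initialization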
00:05:26.893 [2024-07-13 00:30:38.366926] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.893 00:05:26.893 real 0m0.068s 00:05:26.893 user 0m0.048s 00:05:26.893 sys 0m0.019s 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.893 00:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.893 ************************************ 00:05:26.893 END TEST skip_rpc_with_delay 00:05:26.893 ************************************ 00:05:26.893 00:30:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.893 00:30:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.893 00:30:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.893 00:30:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.893 00:30:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.893 00:30:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.893 00:30:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.152 ************************************ 00:05:27.152 START TEST exit_on_failed_rpc_init 00:05:27.152 ************************************ 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1198657 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1198657 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1198657 ']' 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.152 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.152 [2024-07-13 00:30:38.504894] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:27.152 [2024-07-13 00:30:38.504940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198657 ] 00:05:27.152 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.152 [2024-07-13 00:30:38.574813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.152 [2024-07-13 00:30:38.616939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:27.410 00:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.410 [2024-07-13 00:30:38.864536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:27.410 [2024-07-13 00:30:38.864580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198669 ] 00:05:27.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.410 [2024-07-13 00:30:38.930853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.669 [2024-07-13 00:30:38.970709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.669 [2024-07-13 00:30:38.970787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:27.669 [2024-07-13 00:30:38.970797] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.669 [2024-07-13 00:30:38.970803] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1198657 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1198657 ']' 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1198657 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1198657 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1198657' 00:05:27.669 killing process with pid 1198657 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1198657 00:05:27.669 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1198657 00:05:27.927 00:05:27.927 real 0m0.922s 00:05:27.927 user 0m0.966s 00:05:27.927 sys 0m0.392s 00:05:27.927 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.927 00:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.927 ************************************ 00:05:27.927 END TEST exit_on_failed_rpc_init 00:05:27.927 ************************************ 00:05:27.927 00:30:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.927 00:30:39 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.927 00:05:27.927 real 0m12.943s 00:05:27.927 user 0m12.186s 00:05:27.927 sys 0m1.508s 00:05:27.927 00:30:39 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.927 00:30:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.927 ************************************ 00:05:27.927 END TEST skip_rpc 00:05:27.927 ************************************ 00:05:27.927 00:30:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.927 00:30:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.927 00:30:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.927 00:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.927 00:30:39 -- common/autotest_common.sh@10 -- # set +x 00:05:27.927 ************************************ 00:05:27.927 START TEST rpc_client 00:05:27.927 ************************************ 00:05:27.927 00:30:39 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.186 * Looking for test storage... 00:05:28.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:28.186 00:30:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:28.186 OK 00:05:28.186 00:30:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.186 00:05:28.186 real 0m0.113s 00:05:28.186 user 0m0.054s 00:05:28.186 sys 0m0.067s 00:05:28.186 00:30:39 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.186 00:30:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.186 ************************************ 00:05:28.186 END TEST rpc_client 00:05:28.186 ************************************ 00:05:28.186 00:30:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.186 00:30:39 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.186 00:30:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.186 00:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.186 00:30:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.186 ************************************ 00:05:28.186 START TEST json_config 00:05:28.186 ************************************ 00:05:28.186 00:30:39 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.186 00:30:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.186 
00:30:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.186 00:30:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.186 00:30:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.186 00:30:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.186 00:30:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.186 00:30:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.186 00:30:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.186 00:30:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.186 00:30:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@47 -- # : 0 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.186 00:30:39 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.186 00:30:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.186 00:30:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:28.445 INFO: JSON configuration test init 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:28.445 00:30:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.445 00:30:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:28.445 00:30:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.445 00:30:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.445 00:30:39 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.446 00:30:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.446 00:30:39 json_config -- json_config/common.sh@10 -- # shift 00:05:28.446 00:30:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.446 00:30:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.446 00:30:39 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.446 00:30:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.446 00:30:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.446 00:30:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1199004 00:05:28.446 00:30:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.446 Waiting for target to run... 00:05:28.446 00:30:39 json_config -- json_config/common.sh@25 -- # waitforlisten 1199004 /var/tmp/spdk_tgt.sock 00:05:28.446 00:30:39 json_config -- common/autotest_common.sh@829 -- # '[' -z 1199004 ']' 00:05:28.446 00:30:39 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.446 00:30:39 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.446 00:30:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.446 00:30:39 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.446 00:30:39 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.446 00:30:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.446 [2024-07-13 00:30:39.809304] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:28.446 [2024-07-13 00:30:39.809357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199004 ] 00:05:28.446 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.704 [2024-07-13 00:30:40.258383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.963 [2024-07-13 00:30:40.291834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.221 00:30:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.221 00:30:40 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:29.221 00:30:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.221 00:05:29.221 00:30:40 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:29.221 00:30:40 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:29.221 00:30:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.221 00:30:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.221 00:30:40 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:29.221 00:30:40 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:29.221 00:30:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.221 00:30:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.221 00:30:40 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.221 00:30:40 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:29.221 00:30:40 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:32.565 00:30:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.565 00:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:32.565 00:30:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:32.565 00:30:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.565 00:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:32.565 00:30:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.565 00:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:32.565 00:30:43 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.565 00:30:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.828 MallocForNvmf0 00:05:32.828 00:30:44 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.828 00:30:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.828 MallocForNvmf1 00:05:32.828 00:30:44 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.828 00:30:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.087 [2024-07-13 00:30:44.457927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.087 00:30:44 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.087 00:30:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.087 00:30:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.087 00:30:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.346 00:30:44 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.346 00:30:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.606 00:30:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.606 00:30:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.606 [2024-07-13 00:30:45.131946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.606 00:30:45 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:33.606 00:30:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.606 00:30:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.865 00:30:45 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:33.865 00:30:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.865 00:30:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.865 00:30:45 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:33.865 00:30:45 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.866 00:30:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.866 MallocBdevForConfigChangeCheck 00:05:33.866 00:30:45 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:33.866 00:30:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.866 00:30:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.866 00:30:45 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:33.866 00:30:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.434 00:30:45 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:34.434 INFO: shutting down applications... 00:05:34.434 00:30:45 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:34.434 00:30:45 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:34.434 00:30:45 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:34.434 00:30:45 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:35.813 Calling clear_iscsi_subsystem 00:05:35.813 Calling clear_nvmf_subsystem 00:05:35.813 Calling clear_nbd_subsystem 00:05:35.813 Calling clear_ublk_subsystem 00:05:35.813 Calling clear_vhost_blk_subsystem 00:05:35.813 Calling clear_vhost_scsi_subsystem 00:05:35.813 Calling clear_bdev_subsystem 00:05:35.813 00:30:47 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:35.813 00:30:47 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:35.813 00:30:47 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:35.814 00:30:47 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.814 00:30:47 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:35.814 00:30:47 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.382 00:30:47 json_config -- json_config/json_config.sh@345 -- # break 00:05:36.382 00:30:47 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:36.382 00:30:47 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:36.382 00:30:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.382 00:30:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.382 00:30:47 json_config -- json_config/common.sh@35 -- # [[ -n 1199004 ]] 00:05:36.382 00:30:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1199004 00:05:36.382 00:30:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.382 00:30:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.382 00:30:47 json_config -- json_config/common.sh@41 -- # kill -0 1199004 00:05:36.382 00:30:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.642 00:30:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.642 00:30:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.642 00:30:48 json_config -- json_config/common.sh@41 -- # kill -0 1199004 00:05:36.642 00:30:48 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.642 00:30:48 json_config -- json_config/common.sh@43 -- # break 00:05:36.642 00:30:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.642 00:30:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:36.642 SPDK target shutdown done 00:05:36.642 00:30:48 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:36.642 INFO: relaunching applications... 00:05:36.642 00:30:48 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.642 00:30:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.642 00:30:48 json_config -- json_config/common.sh@10 -- # shift 00:05:36.642 00:30:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.642 00:30:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.642 00:30:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.642 00:30:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.642 00:30:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.642 00:30:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1200514 00:05:36.642 00:30:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.642 Waiting for target to run... 00:05:36.642 00:30:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:36.642 00:30:48 json_config -- json_config/common.sh@25 -- # waitforlisten 1200514 /var/tmp/spdk_tgt.sock 00:05:36.642 00:30:48 json_config -- common/autotest_common.sh@829 -- # '[' -z 1200514 ']' 00:05:36.642 00:30:48 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.642 00:30:48 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.642 00:30:48 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.642 00:30:48 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.642 00:30:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.902 [2024-07-13 00:30:48.203783] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
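[Annotation, not part of the captured console output.] The target configuration that the json_config test drives over /var/tmp/spdk_tgt.sock above reduces to the following RPC sequence — a minimal sketch assuming a freshly started spdk_tgt and the repository's scripts/rpc.py; every command is taken verbatim from the trace above:

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

  # Two malloc bdevs to serve as namespaces (size in MiB, block size in bytes)
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

  # TCP transport, one subsystem, both namespaces, listener on 127.0.0.1:4420
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420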
00:05:36.902 [2024-07-13 00:30:48.203868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200514 ] 00:05:36.902 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.161 [2024-07-13 00:30:48.653427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.161 [2024-07-13 00:30:48.685290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.452 [2024-07-13 00:30:51.679197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.452 [2024-07-13 00:30:51.711499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.021 00:30:52 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.021 00:30:52 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:41.021 00:30:52 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.021 00:05:41.021 00:30:52 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:41.021 00:30:52 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.021 INFO: Checking if target configuration is the same... 00:05:41.021 00:30:52 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.021 00:30:52 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:41.021 00:30:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.021 + '[' 2 -ne 2 ']' 00:05:41.021 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.021 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.021 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.021 +++ basename /dev/fd/62 00:05:41.021 ++ mktemp /tmp/62.XXX 00:05:41.021 + tmp_file_1=/tmp/62.2NB 00:05:41.021 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.021 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.021 + tmp_file_2=/tmp/spdk_tgt_config.json.AOR 00:05:41.021 + ret=0 00:05:41.021 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.280 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.280 + diff -u /tmp/62.2NB /tmp/spdk_tgt_config.json.AOR 00:05:41.280 + echo 'INFO: JSON config files are the same' 00:05:41.280 INFO: JSON config files are the same 00:05:41.280 + rm /tmp/62.2NB /tmp/spdk_tgt_config.json.AOR 00:05:41.280 + exit 0 00:05:41.280 00:30:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:41.280 00:30:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:41.280 INFO: changing configuration and checking if this can be detected... 
00:05:41.280 00:30:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.280 00:30:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.538 00:30:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:41.538 00:30:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.538 00:30:52 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.538 + '[' 2 -ne 2 ']' 00:05:41.538 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.538 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.538 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.538 +++ basename /dev/fd/62 00:05:41.538 ++ mktemp /tmp/62.XXX 00:05:41.538 + tmp_file_1=/tmp/62.fHH 00:05:41.538 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.538 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.538 + tmp_file_2=/tmp/spdk_tgt_config.json.Pnu 00:05:41.538 + ret=0 00:05:41.538 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.796 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.796 + diff -u /tmp/62.fHH /tmp/spdk_tgt_config.json.Pnu 00:05:41.796 + ret=1 00:05:41.796 + echo '=== Start of file: /tmp/62.fHH ===' 00:05:41.796 + cat /tmp/62.fHH 00:05:41.796 + echo '=== End of file: /tmp/62.fHH ===' 00:05:41.796 + echo '' 00:05:41.796 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Pnu ===' 00:05:41.796 + cat /tmp/spdk_tgt_config.json.Pnu 00:05:41.796 + echo '=== End of file: /tmp/spdk_tgt_config.json.Pnu ===' 00:05:41.796 + echo '' 00:05:41.796 + rm /tmp/62.fHH /tmp/spdk_tgt_config.json.Pnu 00:05:41.796 + exit 1 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:41.796 INFO: configuration change detected. 
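[Annotation, not part of the captured console output.] The "files are the same" / "configuration change detected" verdicts above come from an order-insensitive diff of two save_config dumps. A sketch of the idea, using temp files in place of the test's /dev/fd/62 redirections:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Normalize both configs with the sort filter, then compare textually
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
  $spdk/test/json_config/config_filter.py -method sort \
      < $spdk/spdk_tgt_config.json > /tmp/disk.json
  diff -u /tmp/live.json /tmp/disk.json \
      && echo 'INFO: JSON config files are the same' \
      || echo 'INFO: configuration change detected.'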
00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@317 -- # [[ -n 1200514 ]] 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.796 00:30:53 json_config -- json_config/json_config.sh@323 -- # killprocess 1200514 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@948 -- # '[' -z 1200514 ']' 00:05:41.796 00:30:53 json_config -- common/autotest_common.sh@952 -- # kill -0 1200514 00:05:41.797 00:30:53 json_config -- common/autotest_common.sh@953 -- # uname 00:05:41.797 00:30:53 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.797 00:30:53 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1200514 00:05:42.056 00:30:53 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.056 00:30:53 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.056 00:30:53 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1200514' 00:05:42.056 killing process with pid 1200514 00:05:42.056 00:30:53 json_config -- common/autotest_common.sh@967 -- # kill 1200514 00:05:42.056 00:30:53 json_config -- common/autotest_common.sh@972 -- # wait 1200514 00:05:43.436 00:30:54 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.436 00:30:54 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:43.436 00:30:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.436 00:30:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.436 00:30:54 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:43.436 00:30:54 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:43.436 INFO: Success 00:05:43.436 00:05:43.436 real 0m15.252s 
00:05:43.436 user 0m15.917s 00:05:43.436 sys 0m2.075s 00:05:43.436 00:30:54 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.436 00:30:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.436 ************************************ 00:05:43.436 END TEST json_config 00:05:43.436 ************************************ 00:05:43.436 00:30:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.436 00:30:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.436 00:30:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.436 00:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.436 00:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.436 ************************************ 00:05:43.436 START TEST json_config_extra_key 00:05:43.437 ************************************ 00:05:43.437 00:30:54 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.696 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.696 00:30:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.696 00:30:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.696 00:30:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.696 00:30:55 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.696 00:30:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.696 00:30:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.696 00:30:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.696 00:30:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.696 00:30:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.697 00:30:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.697 00:30:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.697 00:30:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.697 00:30:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.697 00:30:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.697 00:30:55 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.697 INFO: launching applications... 00:05:43.697 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1201782 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.697 Waiting for target to run... 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1201782 /var/tmp/spdk_tgt.sock 00:05:43.697 00:30:55 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1201782 ']' 00:05:43.697 00:30:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.697 00:30:55 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.697 00:30:55 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.697 00:30:55 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.697 00:30:55 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.697 00:30:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.697 [2024-07-13 00:30:55.125547] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
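[Annotation, not part of the captured console output.] The json_config_extra_key run below starts spdk_tgt from a canned JSON config and later stops it with SIGINT; the start/stop harness pattern, condensed from the json_config/common.sh fragments visible in this log:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Launch the target against the extra_key config and remember its pid
  $spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json $spdk/test/json_config/extra_key.json &
  pid=$!

  # ... exercise the target over /var/tmp/spdk_tgt.sock ...

  # Graceful stop: SIGINT, then poll up to 30 times at 0.5 s intervals
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break
      sleep 0.5
  done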
00:05:43.697 [2024-07-13 00:30:55.125595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201782 ] 00:05:43.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.266 [2024-07-13 00:30:55.574297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.266 [2024-07-13 00:30:55.607987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.525 00:30:55 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.525 00:30:55 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.525 00:05:44.525 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:44.525 INFO: shutting down applications... 00:05:44.525 00:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1201782 ]] 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1201782 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1201782 00:05:44.525 00:30:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.093 00:30:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.093 00:30:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.093 00:30:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1201782 00:05:45.093 00:30:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.093 00:30:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.093 00:30:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.093 00:30:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.093 SPDK target shutdown done 00:05:45.094 00:30:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:45.094 Success 00:05:45.094 00:05:45.094 real 0m1.455s 00:05:45.094 user 0m1.061s 00:05:45.094 sys 0m0.542s 00:05:45.094 00:30:56 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.094 00:30:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.094 ************************************ 00:05:45.094 END TEST json_config_extra_key 00:05:45.094 ************************************ 00:05:45.094 00:30:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.094 00:30:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.094 00:30:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.094 00:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.094 00:30:56 -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.094 ************************************ 00:05:45.094 START TEST alias_rpc 00:05:45.094 ************************************ 00:05:45.094 00:30:56 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.094 * Looking for test storage... 00:05:45.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:45.094 00:30:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.094 00:30:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1202063 00:05:45.094 00:30:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1202063 00:05:45.094 00:30:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.094 00:30:56 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1202063 ']' 00:05:45.094 00:30:56 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.094 00:30:56 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.094 00:30:56 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.094 00:30:56 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.094 00:30:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.094 [2024-07-13 00:30:56.642844] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:45.094 [2024-07-13 00:30:56.642890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202063 ] 00:05:45.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.353 [2024-07-13 00:30:56.709737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.353 [2024-07-13 00:30:56.750177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.921 00:30:57 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.921 00:30:57 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.921 00:30:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:46.180 00:30:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1202063 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1202063 ']' 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1202063 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1202063 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1202063' 00:05:46.180 killing process with pid 1202063 00:05:46.180 00:30:57 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1202063 00:05:46.180 00:30:57 alias_rpc -- common/autotest_common.sh@972 -- # wait 1202063 00:05:46.748 00:05:46.748 real 0m1.497s 00:05:46.748 user 0m1.631s 00:05:46.748 sys 0m0.424s 00:05:46.748 00:30:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.748 00:30:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.748 ************************************ 00:05:46.748 END TEST alias_rpc 00:05:46.748 ************************************ 00:05:46.748 00:30:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.748 00:30:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:46.748 00:30:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.748 00:30:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.748 00:30:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.748 00:30:58 -- common/autotest_common.sh@10 -- # set +x 00:05:46.748 ************************************ 00:05:46.748 START TEST spdkcli_tcp 00:05:46.748 ************************************ 00:05:46.748 00:30:58 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.748 * Looking for test storage... 00:05:46.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.748 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.748 00:30:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1202350 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1202350 00:05:46.749 00:30:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1202350 ']' 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.749 00:30:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.749 [2024-07-13 00:30:58.211672] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
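[Annotation, not part of the captured console output.] The spdkcli_tcp test starting here verifies that JSON-RPC works over TCP by bridging the target's Unix-domain RPC socket with socat and pointing rpc.py at the TCP side; in essence:

  # Bridge 127.0.0.1:9998 to the target's Unix-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Issue an RPC over TCP (-r retries, -t timeout, flags as in the log above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"   # cleanup shown here for completeness; the test's err_cleanup trap handles failures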
00:05:46.749 [2024-07-13 00:30:58.211721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202350 ] 00:05:46.749 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.749 [2024-07-13 00:30:58.281278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.008 [2024-07-13 00:30:58.322589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.008 [2024-07-13 00:30:58.322591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.576 00:30:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.576 00:30:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:47.576 00:30:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1202556 00:05:47.576 00:30:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.576 00:30:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.836 [ 00:05:47.836 "bdev_malloc_delete", 00:05:47.836 "bdev_malloc_create", 00:05:47.836 "bdev_null_resize", 00:05:47.836 "bdev_null_delete", 00:05:47.836 "bdev_null_create", 00:05:47.836 "bdev_nvme_cuse_unregister", 00:05:47.836 "bdev_nvme_cuse_register", 00:05:47.836 "bdev_opal_new_user", 00:05:47.836 "bdev_opal_set_lock_state", 00:05:47.836 "bdev_opal_delete", 00:05:47.836 "bdev_opal_get_info", 00:05:47.836 "bdev_opal_create", 00:05:47.836 "bdev_nvme_opal_revert", 00:05:47.836 "bdev_nvme_opal_init", 00:05:47.836 "bdev_nvme_send_cmd", 00:05:47.836 "bdev_nvme_get_path_iostat", 00:05:47.836 "bdev_nvme_get_mdns_discovery_info", 00:05:47.836 "bdev_nvme_stop_mdns_discovery", 00:05:47.836 "bdev_nvme_start_mdns_discovery", 00:05:47.836 "bdev_nvme_set_multipath_policy", 00:05:47.836 "bdev_nvme_set_preferred_path", 00:05:47.836 "bdev_nvme_get_io_paths", 00:05:47.836 "bdev_nvme_remove_error_injection", 00:05:47.836 "bdev_nvme_add_error_injection", 00:05:47.836 "bdev_nvme_get_discovery_info", 00:05:47.836 "bdev_nvme_stop_discovery", 00:05:47.836 "bdev_nvme_start_discovery", 00:05:47.836 "bdev_nvme_get_controller_health_info", 00:05:47.836 "bdev_nvme_disable_controller", 00:05:47.836 "bdev_nvme_enable_controller", 00:05:47.836 "bdev_nvme_reset_controller", 00:05:47.836 "bdev_nvme_get_transport_statistics", 00:05:47.836 "bdev_nvme_apply_firmware", 00:05:47.836 "bdev_nvme_detach_controller", 00:05:47.836 "bdev_nvme_get_controllers", 00:05:47.836 "bdev_nvme_attach_controller", 00:05:47.836 "bdev_nvme_set_hotplug", 00:05:47.836 "bdev_nvme_set_options", 00:05:47.836 "bdev_passthru_delete", 00:05:47.836 "bdev_passthru_create", 00:05:47.836 "bdev_lvol_set_parent_bdev", 00:05:47.836 "bdev_lvol_set_parent", 00:05:47.836 "bdev_lvol_check_shallow_copy", 00:05:47.836 "bdev_lvol_start_shallow_copy", 00:05:47.836 "bdev_lvol_grow_lvstore", 00:05:47.836 "bdev_lvol_get_lvols", 00:05:47.836 "bdev_lvol_get_lvstores", 00:05:47.836 "bdev_lvol_delete", 00:05:47.836 "bdev_lvol_set_read_only", 00:05:47.836 "bdev_lvol_resize", 00:05:47.836 "bdev_lvol_decouple_parent", 00:05:47.836 "bdev_lvol_inflate", 00:05:47.836 "bdev_lvol_rename", 00:05:47.836 "bdev_lvol_clone_bdev", 00:05:47.836 "bdev_lvol_clone", 00:05:47.836 "bdev_lvol_snapshot", 00:05:47.836 "bdev_lvol_create", 00:05:47.836 "bdev_lvol_delete_lvstore", 00:05:47.836 
"bdev_lvol_rename_lvstore", 00:05:47.836 "bdev_lvol_create_lvstore", 00:05:47.836 "bdev_raid_set_options", 00:05:47.836 "bdev_raid_remove_base_bdev", 00:05:47.836 "bdev_raid_add_base_bdev", 00:05:47.836 "bdev_raid_delete", 00:05:47.836 "bdev_raid_create", 00:05:47.836 "bdev_raid_get_bdevs", 00:05:47.836 "bdev_error_inject_error", 00:05:47.836 "bdev_error_delete", 00:05:47.836 "bdev_error_create", 00:05:47.836 "bdev_split_delete", 00:05:47.836 "bdev_split_create", 00:05:47.836 "bdev_delay_delete", 00:05:47.836 "bdev_delay_create", 00:05:47.836 "bdev_delay_update_latency", 00:05:47.836 "bdev_zone_block_delete", 00:05:47.836 "bdev_zone_block_create", 00:05:47.836 "blobfs_create", 00:05:47.836 "blobfs_detect", 00:05:47.836 "blobfs_set_cache_size", 00:05:47.836 "bdev_aio_delete", 00:05:47.836 "bdev_aio_rescan", 00:05:47.836 "bdev_aio_create", 00:05:47.836 "bdev_ftl_set_property", 00:05:47.836 "bdev_ftl_get_properties", 00:05:47.836 "bdev_ftl_get_stats", 00:05:47.836 "bdev_ftl_unmap", 00:05:47.836 "bdev_ftl_unload", 00:05:47.836 "bdev_ftl_delete", 00:05:47.836 "bdev_ftl_load", 00:05:47.836 "bdev_ftl_create", 00:05:47.836 "bdev_virtio_attach_controller", 00:05:47.836 "bdev_virtio_scsi_get_devices", 00:05:47.836 "bdev_virtio_detach_controller", 00:05:47.836 "bdev_virtio_blk_set_hotplug", 00:05:47.836 "bdev_iscsi_delete", 00:05:47.836 "bdev_iscsi_create", 00:05:47.836 "bdev_iscsi_set_options", 00:05:47.836 "accel_error_inject_error", 00:05:47.836 "ioat_scan_accel_module", 00:05:47.836 "dsa_scan_accel_module", 00:05:47.836 "iaa_scan_accel_module", 00:05:47.836 "vfu_virtio_create_scsi_endpoint", 00:05:47.836 "vfu_virtio_scsi_remove_target", 00:05:47.836 "vfu_virtio_scsi_add_target", 00:05:47.836 "vfu_virtio_create_blk_endpoint", 00:05:47.836 "vfu_virtio_delete_endpoint", 00:05:47.836 "keyring_file_remove_key", 00:05:47.836 "keyring_file_add_key", 00:05:47.836 "keyring_linux_set_options", 00:05:47.836 "iscsi_get_histogram", 00:05:47.836 "iscsi_enable_histogram", 00:05:47.836 "iscsi_set_options", 00:05:47.836 "iscsi_get_auth_groups", 00:05:47.836 "iscsi_auth_group_remove_secret", 00:05:47.836 "iscsi_auth_group_add_secret", 00:05:47.836 "iscsi_delete_auth_group", 00:05:47.836 "iscsi_create_auth_group", 00:05:47.836 "iscsi_set_discovery_auth", 00:05:47.836 "iscsi_get_options", 00:05:47.836 "iscsi_target_node_request_logout", 00:05:47.836 "iscsi_target_node_set_redirect", 00:05:47.836 "iscsi_target_node_set_auth", 00:05:47.836 "iscsi_target_node_add_lun", 00:05:47.836 "iscsi_get_stats", 00:05:47.836 "iscsi_get_connections", 00:05:47.836 "iscsi_portal_group_set_auth", 00:05:47.836 "iscsi_start_portal_group", 00:05:47.836 "iscsi_delete_portal_group", 00:05:47.836 "iscsi_create_portal_group", 00:05:47.836 "iscsi_get_portal_groups", 00:05:47.836 "iscsi_delete_target_node", 00:05:47.836 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.836 "iscsi_target_node_add_pg_ig_maps", 00:05:47.836 "iscsi_create_target_node", 00:05:47.836 "iscsi_get_target_nodes", 00:05:47.836 "iscsi_delete_initiator_group", 00:05:47.836 "iscsi_initiator_group_remove_initiators", 00:05:47.836 "iscsi_initiator_group_add_initiators", 00:05:47.836 "iscsi_create_initiator_group", 00:05:47.836 "iscsi_get_initiator_groups", 00:05:47.836 "nvmf_set_crdt", 00:05:47.836 "nvmf_set_config", 00:05:47.836 "nvmf_set_max_subsystems", 00:05:47.836 "nvmf_stop_mdns_prr", 00:05:47.836 "nvmf_publish_mdns_prr", 00:05:47.836 "nvmf_subsystem_get_listeners", 00:05:47.836 "nvmf_subsystem_get_qpairs", 00:05:47.836 "nvmf_subsystem_get_controllers", 00:05:47.836 
"nvmf_get_stats", 00:05:47.836 "nvmf_get_transports", 00:05:47.836 "nvmf_create_transport", 00:05:47.836 "nvmf_get_targets", 00:05:47.836 "nvmf_delete_target", 00:05:47.836 "nvmf_create_target", 00:05:47.836 "nvmf_subsystem_allow_any_host", 00:05:47.836 "nvmf_subsystem_remove_host", 00:05:47.836 "nvmf_subsystem_add_host", 00:05:47.836 "nvmf_ns_remove_host", 00:05:47.836 "nvmf_ns_add_host", 00:05:47.836 "nvmf_subsystem_remove_ns", 00:05:47.836 "nvmf_subsystem_add_ns", 00:05:47.836 "nvmf_subsystem_listener_set_ana_state", 00:05:47.836 "nvmf_discovery_get_referrals", 00:05:47.836 "nvmf_discovery_remove_referral", 00:05:47.836 "nvmf_discovery_add_referral", 00:05:47.836 "nvmf_subsystem_remove_listener", 00:05:47.836 "nvmf_subsystem_add_listener", 00:05:47.836 "nvmf_delete_subsystem", 00:05:47.836 "nvmf_create_subsystem", 00:05:47.836 "nvmf_get_subsystems", 00:05:47.836 "env_dpdk_get_mem_stats", 00:05:47.836 "nbd_get_disks", 00:05:47.836 "nbd_stop_disk", 00:05:47.836 "nbd_start_disk", 00:05:47.836 "ublk_recover_disk", 00:05:47.836 "ublk_get_disks", 00:05:47.836 "ublk_stop_disk", 00:05:47.836 "ublk_start_disk", 00:05:47.836 "ublk_destroy_target", 00:05:47.836 "ublk_create_target", 00:05:47.836 "virtio_blk_create_transport", 00:05:47.836 "virtio_blk_get_transports", 00:05:47.836 "vhost_controller_set_coalescing", 00:05:47.836 "vhost_get_controllers", 00:05:47.836 "vhost_delete_controller", 00:05:47.836 "vhost_create_blk_controller", 00:05:47.836 "vhost_scsi_controller_remove_target", 00:05:47.836 "vhost_scsi_controller_add_target", 00:05:47.836 "vhost_start_scsi_controller", 00:05:47.836 "vhost_create_scsi_controller", 00:05:47.836 "thread_set_cpumask", 00:05:47.836 "framework_get_governor", 00:05:47.836 "framework_get_scheduler", 00:05:47.836 "framework_set_scheduler", 00:05:47.836 "framework_get_reactors", 00:05:47.836 "thread_get_io_channels", 00:05:47.836 "thread_get_pollers", 00:05:47.836 "thread_get_stats", 00:05:47.836 "framework_monitor_context_switch", 00:05:47.836 "spdk_kill_instance", 00:05:47.836 "log_enable_timestamps", 00:05:47.836 "log_get_flags", 00:05:47.836 "log_clear_flag", 00:05:47.836 "log_set_flag", 00:05:47.836 "log_get_level", 00:05:47.836 "log_set_level", 00:05:47.837 "log_get_print_level", 00:05:47.837 "log_set_print_level", 00:05:47.837 "framework_enable_cpumask_locks", 00:05:47.837 "framework_disable_cpumask_locks", 00:05:47.837 "framework_wait_init", 00:05:47.837 "framework_start_init", 00:05:47.837 "scsi_get_devices", 00:05:47.837 "bdev_get_histogram", 00:05:47.837 "bdev_enable_histogram", 00:05:47.837 "bdev_set_qos_limit", 00:05:47.837 "bdev_set_qd_sampling_period", 00:05:47.837 "bdev_get_bdevs", 00:05:47.837 "bdev_reset_iostat", 00:05:47.837 "bdev_get_iostat", 00:05:47.837 "bdev_examine", 00:05:47.837 "bdev_wait_for_examine", 00:05:47.837 "bdev_set_options", 00:05:47.837 "notify_get_notifications", 00:05:47.837 "notify_get_types", 00:05:47.837 "accel_get_stats", 00:05:47.837 "accel_set_options", 00:05:47.837 "accel_set_driver", 00:05:47.837 "accel_crypto_key_destroy", 00:05:47.837 "accel_crypto_keys_get", 00:05:47.837 "accel_crypto_key_create", 00:05:47.837 "accel_assign_opc", 00:05:47.837 "accel_get_module_info", 00:05:47.837 "accel_get_opc_assignments", 00:05:47.837 "vmd_rescan", 00:05:47.837 "vmd_remove_device", 00:05:47.837 "vmd_enable", 00:05:47.837 "sock_get_default_impl", 00:05:47.837 "sock_set_default_impl", 00:05:47.837 "sock_impl_set_options", 00:05:47.837 "sock_impl_get_options", 00:05:47.837 "iobuf_get_stats", 00:05:47.837 "iobuf_set_options", 
00:05:47.837 "keyring_get_keys", 00:05:47.837 "framework_get_pci_devices", 00:05:47.837 "framework_get_config", 00:05:47.837 "framework_get_subsystems", 00:05:47.837 "vfu_tgt_set_base_path", 00:05:47.837 "trace_get_info", 00:05:47.837 "trace_get_tpoint_group_mask", 00:05:47.837 "trace_disable_tpoint_group", 00:05:47.837 "trace_enable_tpoint_group", 00:05:47.837 "trace_clear_tpoint_mask", 00:05:47.837 "trace_set_tpoint_mask", 00:05:47.837 "spdk_get_version", 00:05:47.837 "rpc_get_methods" 00:05:47.837 ] 00:05:47.837 00:30:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.837 00:30:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.837 00:30:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1202350 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1202350 ']' 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1202350 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1202350 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1202350' 00:05:47.837 killing process with pid 1202350 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1202350 00:05:47.837 00:30:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1202350 00:05:48.096 00:05:48.096 real 0m1.517s 00:05:48.096 user 0m2.839s 00:05:48.096 sys 0m0.452s 00:05:48.096 00:30:59 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.096 00:30:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.096 ************************************ 00:05:48.096 END TEST spdkcli_tcp 00:05:48.096 ************************************ 00:05:48.096 00:30:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.096 00:30:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.096 00:30:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.096 00:30:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.096 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:48.096 ************************************ 00:05:48.096 START TEST dpdk_mem_utility 00:05:48.096 ************************************ 00:05:48.096 00:30:59 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.356 * Looking for test storage... 
00:05:48.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:48.356 00:30:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.356 00:30:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1202656 00:05:48.356 00:30:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1202656 00:05:48.356 00:30:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.356 00:30:59 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1202656 ']' 00:05:48.356 00:30:59 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.356 00:30:59 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.356 00:30:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.356 00:30:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.356 00:30:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.356 [2024-07-13 00:30:59.791385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:48.356 [2024-07-13 00:30:59.791429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202656 ] 00:05:48.356 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.356 [2024-07-13 00:30:59.859676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.356 [2024-07-13 00:30:59.899587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.293 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.293 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:49.293 00:31:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.293 00:31:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.293 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.293 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.293 { 00:05:49.293 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.293 } 00:05:49.293 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.293 00:31:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.293 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:49.293 1 heaps totaling size 814.000000 MiB 00:05:49.293 size: 814.000000 MiB heap id: 0 00:05:49.293 end heaps---------- 00:05:49.293 8 mempools totaling size 598.116089 MiB 00:05:49.293 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.293 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.293 size: 84.521057 MiB name: bdev_io_1202656 00:05:49.293 size: 51.011292 MiB name: evtpool_1202656 00:05:49.293 
size: 50.003479 MiB name: msgpool_1202656 00:05:49.293 size: 21.763794 MiB name: PDU_Pool 00:05:49.293 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.293 size: 0.026123 MiB name: Session_Pool 00:05:49.293 end mempools------- 00:05:49.293 6 memzones totaling size 4.142822 MiB 00:05:49.293 size: 1.000366 MiB name: RG_ring_0_1202656 00:05:49.293 size: 1.000366 MiB name: RG_ring_1_1202656 00:05:49.293 size: 1.000366 MiB name: RG_ring_4_1202656 00:05:49.293 size: 1.000366 MiB name: RG_ring_5_1202656 00:05:49.293 size: 0.125366 MiB name: RG_ring_2_1202656 00:05:49.293 size: 0.015991 MiB name: RG_ring_3_1202656 00:05:49.293 end memzones------- 00:05:49.293 00:31:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.293 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:49.293 list of free elements. size: 12.519348 MiB 00:05:49.293 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:49.293 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:49.293 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:49.293 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:49.293 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:49.293 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:49.293 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:49.293 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:49.293 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:49.293 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:49.293 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:49.293 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:49.293 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:49.293 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:49.293 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:49.293 list of standard malloc elements. 
size: 199.218079 MiB 00:05:49.293 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:49.293 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:49.293 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:49.293 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:49.293 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:49.293 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:49.293 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:49.293 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:49.293 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:49.293 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:49.293 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:49.293 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:49.293 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:49.293 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:49.293 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:49.293 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:49.293 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:49.293 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:49.293 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:49.293 list of memzone associated elements. 
size: 602.262573 MiB 00:05:49.293 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:49.293 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:49.293 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:49.293 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:49.293 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:49.293 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1202656_0 00:05:49.293 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:49.293 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1202656_0 00:05:49.293 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:49.293 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1202656_0 00:05:49.293 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:49.294 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:49.294 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:49.294 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:49.294 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:49.294 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1202656 00:05:49.294 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:49.294 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1202656 00:05:49.294 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:49.294 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1202656 00:05:49.294 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:49.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:49.294 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:49.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:49.294 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:49.294 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:49.294 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:49.294 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:49.294 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:49.294 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1202656 00:05:49.294 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:49.294 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1202656 00:05:49.294 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:49.294 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1202656 00:05:49.294 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:49.294 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1202656 00:05:49.294 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:49.294 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1202656 00:05:49.294 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:49.294 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:49.294 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:49.294 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:49.294 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:49.294 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:49.294 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:49.294 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1202656 00:05:49.294 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:49.294 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:49.294 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:49.294 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:49.294 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:49.294 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1202656 00:05:49.294 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:49.294 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:49.294 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:49.294 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1202656 00:05:49.294 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:49.294 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1202656 00:05:49.294 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:49.294 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:49.294 00:31:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:49.294 00:31:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1202656 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1202656 ']' 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1202656 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1202656 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1202656' 00:05:49.294 killing process with pid 1202656 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1202656 00:05:49.294 00:31:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1202656 00:05:49.553 00:05:49.553 real 0m1.406s 00:05:49.553 user 0m1.485s 00:05:49.553 sys 0m0.404s 00:05:49.553 00:31:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.553 00:31:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.553 ************************************ 00:05:49.553 END TEST dpdk_mem_utility 00:05:49.553 ************************************ 00:05:49.553 00:31:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.553 00:31:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.553 00:31:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.553 00:31:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.553 00:31:01 -- common/autotest_common.sh@10 -- # set +x 00:05:49.812 ************************************ 00:05:49.812 START TEST event 00:05:49.812 ************************************ 00:05:49.812 00:31:01 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.812 * Looking for test storage... 
00:05:49.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.812 00:31:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.812 00:31:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.812 00:31:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.812 00:31:01 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:49.812 00:31:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.812 00:31:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.812 ************************************ 00:05:49.812 START TEST event_perf 00:05:49.812 ************************************ 00:05:49.812 00:31:01 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.812 Running I/O for 1 seconds...[2024-07-13 00:31:01.266607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:49.812 [2024-07-13 00:31:01.266674] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202946 ] 00:05:49.812 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.812 [2024-07-13 00:31:01.338717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.071 [2024-07-13 00:31:01.381135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.071 [2024-07-13 00:31:01.381259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.071 [2024-07-13 00:31:01.381318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.071 [2024-07-13 00:31:01.381319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.007 Running I/O for 1 seconds... 00:05:51.007 lcore 0: 209076 00:05:51.007 lcore 1: 209076 00:05:51.007 lcore 2: 209075 00:05:51.007 lcore 3: 209075 00:05:51.007 done. 00:05:51.007 00:05:51.007 real 0m1.197s 00:05:51.007 user 0m4.103s 00:05:51.007 sys 0m0.091s 00:05:51.007 00:31:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.007 00:31:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.007 ************************************ 00:05:51.007 END TEST event_perf 00:05:51.007 ************************************ 00:05:51.007 00:31:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:51.007 00:31:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.007 00:31:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:51.007 00:31:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.007 00:31:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.007 ************************************ 00:05:51.007 START TEST event_reactor 00:05:51.007 ************************************ 00:05:51.007 00:31:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.007 [2024-07-13 00:31:02.534933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:51.007 [2024-07-13 00:31:02.535016] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203197 ] 00:05:51.007 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.266 [2024-07-13 00:31:02.604001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.266 [2024-07-13 00:31:02.643968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.215 test_start 00:05:52.215 oneshot 00:05:52.215 tick 100 00:05:52.215 tick 100 00:05:52.215 tick 250 00:05:52.215 tick 100 00:05:52.215 tick 100 00:05:52.215 tick 100 00:05:52.215 tick 250 00:05:52.215 tick 500 00:05:52.215 tick 100 00:05:52.215 tick 100 00:05:52.215 tick 250 00:05:52.215 tick 100 00:05:52.215 tick 100 00:05:52.215 test_end 00:05:52.215 00:05:52.215 real 0m1.189s 00:05:52.215 user 0m1.104s 00:05:52.215 sys 0m0.081s 00:05:52.215 00:31:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.215 00:31:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.215 ************************************ 00:05:52.215 END TEST event_reactor 00:05:52.215 ************************************ 00:05:52.215 00:31:03 event -- common/autotest_common.sh@1142 -- # return 0 00:05:52.215 00:31:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.215 00:31:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:52.215 00:31:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.215 00:31:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.215 ************************************ 00:05:52.215 START TEST event_reactor_perf 00:05:52.215 ************************************ 00:05:52.215 00:31:03 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.474 [2024-07-13 00:31:03.794642] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:52.474 [2024-07-13 00:31:03.794710] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203443 ] 00:05:52.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.474 [2024-07-13 00:31:03.865339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.474 [2024-07-13 00:31:03.904984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.410 test_start 00:05:53.410 test_end 00:05:53.410 Performance: 507808 events per second 00:05:53.410 00:05:53.410 real 0m1.190s 00:05:53.410 user 0m1.109s 00:05:53.410 sys 0m0.077s 00:05:53.410 00:31:04 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.410 00:31:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.410 ************************************ 00:05:53.410 END TEST event_reactor_perf 00:05:53.410 ************************************ 00:05:53.704 00:31:04 event -- common/autotest_common.sh@1142 -- # return 0 00:05:53.704 00:31:04 event -- event/event.sh@49 -- # uname -s 00:05:53.704 00:31:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.704 00:31:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.704 00:31:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.704 00:31:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.704 00:31:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.704 ************************************ 00:05:53.704 START TEST event_scheduler 00:05:53.704 ************************************ 00:05:53.704 00:31:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.704 * Looking for test storage... 00:05:53.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:53.704 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.704 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1203720 00:05:53.704 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.704 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.704 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1203720 00:05:53.704 00:31:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1203720 ']' 00:05:53.704 00:31:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.704 00:31:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.704 00:31:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
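waitforlisten above polls until the target process exposes its JSON-RPC server on the UNIX domain socket /var/tmp/spdk.sock; the spdkcli_tcp run earlier in this section bridged that same socket to TCP port 9998 with socat before pointing rpc.py at 127.0.0.1:9998. A minimal Python sketch of the request/response exchange, assuming the default socket path and that a reply is a single JSON object with no length prefix (the helper name spdk_rpc is illustrative, not part of SPDK):

    import json
    import socket

    def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        # Frame a JSON-RPC 2.0 request the way rpc.py does.
        req = {"jsonrpc": "2.0", "method": method, "id": 1}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full reply")
                buf += chunk
                try:
                    # No delimiter on the wire: retry until the buffer
                    # parses as one complete JSON object.
                    reply, _ = json.JSONDecoder().raw_decode(buf.decode())
                    return reply
                except ValueError:
                    continue

    print(spdk_rpc("rpc_get_methods"))

Against a live target this returns the same method list captured near the top of this section; over the socat bridge the identical bytes flow via TCP, which is all that rpc.py -s 127.0.0.1 -p 9998 exercises.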
00:05:53.704 00:31:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.704 00:31:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.704 [2024-07-13 00:31:05.174666] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:53.704 [2024-07-13 00:31:05.174714] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203720 ] 00:05:53.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.704 [2024-07-13 00:31:05.243092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.963 [2024-07-13 00:31:05.286652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.963 [2024-07-13 00:31:05.286682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.963 [2024-07-13 00:31:05.286803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.963 [2024-07-13 00:31:05.286802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:53.963 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 [2024-07-13 00:31:05.343458] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:53.963 [2024-07-13 00:31:05.343475] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:53.963 [2024-07-13 00:31:05.343484] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:53.963 [2024-07-13 00:31:05.343490] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:53.963 [2024-07-13 00:31:05.343495] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 [2024-07-13 00:31:05.410353] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
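Because the scheduler app was launched with --wait-for-rpc, it idles until the test selects the dynamic scheduler and only then runs framework_start_init; the NOTICE lines above show the dynamic scheduler falling back from the unavailable DPDK governor and adopting its defaults (load limit 20, core limit 80, core busy 95). Both RPCs appear in the rpc_get_methods listing earlier in this section. A sketch reusing the spdk_rpc helper from above; the {"name": "dynamic"} parameter shape follows SPDK's rpc.py and should be read as illustrative:

    # Select a scheduler before initialization, then release the app.
    spdk_rpc("framework_set_scheduler", {"name": "dynamic"})
    spdk_rpc("framework_start_init")
    # framework_get_scheduler also appears in the method list above.
    print(spdk_rpc("framework_get_scheduler"))

Ordering matters here: only after framework_start_init are the subsystems up, at which point the test can start creating and deleting threads, which is what the scheduler_create_thread xtrace below walks through.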
00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 ************************************ 00:05:53.963 START TEST scheduler_create_thread 00:05:53.963 ************************************ 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 2 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 3 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 4 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 5 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 6 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 7 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 8 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.963 9 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.963 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.223 10 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.223 00:31:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.601 00:31:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.601 00:31:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:55.601 00:31:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:55.601 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.601 00:31:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.537 00:31:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.537 00:05:56.537 real 0m2.621s 00:05:56.537 user 0m0.024s 00:05:56.537 sys 0m0.004s 00:05:56.537 00:31:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.537 00:31:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.537 ************************************ 00:05:56.537 END TEST scheduler_create_thread 00:05:56.537 ************************************ 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:56.797 00:31:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:56.797 00:31:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1203720 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1203720 ']' 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1203720 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1203720 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1203720' 00:05:56.797 killing process with pid 1203720 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1203720 00:05:56.797 00:31:08 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1203720 00:05:57.056 [2024-07-13 00:31:08.544397] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
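The teardown traced above is the killprocess helper each test reuses: kill -0 probes that the PID still exists, ps --no-headers -o comm= resolves the process name (reactor_2 here, plausibly because the EAL parameters pinned --main-lcore=2), the name is compared against sudo as a safety check, and only then is the process signalled and waited on. A rough Python equivalent, assuming Linux /proc is available (as the uname check above verifies):

    import os
    import signal

    def killprocess(pid):
        os.kill(pid, 0)  # like 'kill -0': raises ProcessLookupError if gone
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()  # same answer as ps --no-headers -o comm=
        if name == "sudo":
            raise RuntimeError("refusing to signal a sudo wrapper")
        print(f"killing process with pid {pid} ({name})")
        os.kill(pid, signal.SIGTERM)  # plain 'kill' sends SIGTERM

    killprocess(1203720)

The shell version can follow the kill with wait because the target is its own child; for an unrelated process one would instead poll until kill -0 starts failing.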
00:05:57.316 00:05:57.316 real 0m3.688s 00:05:57.316 user 0m5.587s 00:05:57.316 sys 0m0.342s 00:05:57.316 00:31:08 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.316 00:31:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.316 ************************************ 00:05:57.316 END TEST event_scheduler 00:05:57.316 ************************************ 00:05:57.316 00:31:08 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.316 00:31:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:57.316 00:31:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:57.316 00:31:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.316 00:31:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.316 00:31:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.316 ************************************ 00:05:57.316 START TEST app_repeat 00:05:57.316 ************************************ 00:05:57.316 00:31:08 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1204457 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1204457' 00:05:57.316 Process app_repeat pid: 1204457 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:57.316 spdk_app_start Round 0 00:05:57.316 00:31:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1204457 /var/tmp/spdk-nbd.sock 00:05:57.316 00:31:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1204457 ']' 00:05:57.316 00:31:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.316 00:31:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.316 00:31:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.316 00:31:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.316 00:31:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.316 [2024-07-13 00:31:08.836755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:57.316 [2024-07-13 00:31:08.836833] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204457 ] 00:05:57.316 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.575 [2024-07-13 00:31:08.904564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.575 [2024-07-13 00:31:08.944329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.575 [2024-07-13 00:31:08.944330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.575 00:31:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.575 00:31:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.575 00:31:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.835 Malloc0 00:05:57.835 00:31:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.094 Malloc1 00:05:58.094 00:31:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.094 /dev/nbd0 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.094 00:31:09 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.094 1+0 records in 00:05:58.094 1+0 records out 00:05:58.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224011 s, 18.3 MB/s 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.094 00:31:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.094 00:31:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.353 /dev/nbd1 00:05:58.353 00:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.353 00:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.354 1+0 records in 00:05:58.354 1+0 records out 00:05:58.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000125776 s, 32.6 MB/s 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.354 00:31:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.354 00:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.354 00:31:09 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.354 00:31:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.354 00:31:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.354 00:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.613 { 00:05:58.613 "nbd_device": "/dev/nbd0", 00:05:58.613 "bdev_name": "Malloc0" 00:05:58.613 }, 00:05:58.613 { 00:05:58.613 "nbd_device": "/dev/nbd1", 00:05:58.613 "bdev_name": "Malloc1" 00:05:58.613 } 00:05:58.613 ]' 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.613 { 00:05:58.613 "nbd_device": "/dev/nbd0", 00:05:58.613 "bdev_name": "Malloc0" 00:05:58.613 }, 00:05:58.613 { 00:05:58.613 "nbd_device": "/dev/nbd1", 00:05:58.613 "bdev_name": "Malloc1" 00:05:58.613 } 00:05:58.613 ]' 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.613 /dev/nbd1' 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.613 /dev/nbd1' 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.613 256+0 records in 00:05:58.613 256+0 records out 00:05:58.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00951922 s, 110 MB/s 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.613 256+0 records in 00:05:58.613 256+0 records out 00:05:58.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143779 s, 72.9 MB/s 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.613 256+0 records in 00:05:58.613 256+0 records out 00:05:58.613 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0145429 s, 72.1 MB/s 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.613 00:31:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.614 00:31:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.873 00:31:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.132 00:31:10 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.132 00:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.391 00:31:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.391 00:31:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.650 00:31:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.650 [2024-07-13 00:31:11.124944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.650 [2024-07-13 00:31:11.161648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.650 [2024-07-13 00:31:11.161651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.650 [2024-07-13 00:31:11.202490] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.650 [2024-07-13 00:31:11.202531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.939 00:31:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.939 00:31:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:02.939 spdk_app_start Round 1 00:06:02.939 00:31:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1204457 /var/tmp/spdk-nbd.sock 00:06:02.939 00:31:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1204457 ']' 00:06:02.939 00:31:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.939 00:31:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.939 00:31:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
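The teardown sequence above queries the target over its RPC socket and expects an empty device list once both disks are stopped. A minimal sketch of that counting pattern, reconstructed from the xtrace (the real bdev/nbd_common.sh may differ in detail; the rpc.py path is shortened):

    nbd_get_count() {
        local rpc_server=$1
        local disks_json names
        # Ask the running SPDK target which NBD devices it still exports.
        disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        # Keep only the /dev/nbdX paths; an empty '[]' yields an empty string.
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints the match count but exits non-zero on zero matches,
        # which is why the trace shows a bare 'true' after teardown.
        echo "$names" | grep -c /dev/nbd || true
    }

The caller then asserts the count with '[' 0 -ne 0 ']' style checks, as seen at nbd_common.sh@104-105 above.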
00:06:02.939 00:31:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.939 00:31:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.939 00:31:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.939 00:31:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:02.939 00:31:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.939 Malloc0 00:06:02.939 00:31:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.199 Malloc1 00:06:03.199 00:31:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.199 /dev/nbd0 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:03.199 1+0 records in 00:06:03.199 1+0 records out 00:06:03.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176647 s, 23.2 MB/s 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:03.199 00:31:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.199 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.200 00:31:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.459 /dev/nbd1 00:06:03.459 00:31:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.459 00:31:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.459 1+0 records in 00:06:03.459 1+0 records out 00:06:03.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230939 s, 17.7 MB/s 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:03.459 00:31:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:03.459 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.459 00:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.459 00:31:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.459 00:31:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.459 00:31:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:03.719 { 00:06:03.719 "nbd_device": "/dev/nbd0", 00:06:03.719 "bdev_name": "Malloc0" 00:06:03.719 }, 00:06:03.719 { 00:06:03.719 "nbd_device": "/dev/nbd1", 00:06:03.719 "bdev_name": "Malloc1" 00:06:03.719 } 00:06:03.719 ]' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.719 { 00:06:03.719 "nbd_device": "/dev/nbd0", 00:06:03.719 "bdev_name": "Malloc0" 00:06:03.719 }, 00:06:03.719 { 00:06:03.719 "nbd_device": "/dev/nbd1", 00:06:03.719 "bdev_name": "Malloc1" 00:06:03.719 } 00:06:03.719 ]' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.719 /dev/nbd1' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.719 /dev/nbd1' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.719 256+0 records in 00:06:03.719 256+0 records out 00:06:03.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010329 s, 102 MB/s 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.719 256+0 records in 00:06:03.719 256+0 records out 00:06:03.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130783 s, 80.2 MB/s 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.719 256+0 records in 00:06:03.719 256+0 records out 00:06:03.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144704 s, 72.5 MB/s 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.719 00:31:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.981 00:31:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.240 00:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.499 00:31:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.499 00:31:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.758 00:31:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.758 [2024-07-13 00:31:16.256427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.758 [2024-07-13 00:31:16.292636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.758 [2024-07-13 00:31:16.292637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.017 [2024-07-13 00:31:16.334183] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.017 [2024-07-13 00:31:16.334238] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.553 00:31:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.554 00:31:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:07.554 spdk_app_start Round 2 00:06:07.554 00:31:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1204457 /var/tmp/spdk-nbd.sock 00:06:07.554 00:31:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1204457 ']' 00:06:07.554 00:31:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.554 00:31:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.554 00:31:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
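Each nbd_start_disk above is followed by a readiness probe (autotest_common.sh@866-887 in the trace): poll /proc/partitions for the device, then prove it answers a direct read. A sketch under those assumptions; the poll interval and the temp-file path are not visible in the log and are assumed here:

    waitfornbd() {
        local nbd_name=$1 i size
        # Up to 20 polls for the kernel to publish the block device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed; the interval is not shown in the trace
        done
        # One 4 KiB O_DIRECT read must round-trip through the device...
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        # ...and land as a non-empty file, matching the "4096 != 0" check.
        [ "$size" != 0 ]
    }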
00:06:07.554 00:31:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.554 00:31:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.813 00:31:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.813 00:31:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:07.813 00:31:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.072 Malloc0 00:06:08.072 00:31:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.331 Malloc1 00:06:08.331 00:31:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.331 /dev/nbd0 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:08.331 1+0 records in 00:06:08.331 1+0 records out 00:06:08.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230095 s, 17.8 MB/s 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:08.331 00:31:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.331 00:31:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.590 /dev/nbd1 00:06:08.590 00:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.590 00:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.590 1+0 records in 00:06:08.590 1+0 records out 00:06:08.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198081 s, 20.7 MB/s 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:08.590 00:31:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:08.590 00:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.590 00:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.590 00:31:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.590 00:31:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.590 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:08.850 { 00:06:08.850 "nbd_device": "/dev/nbd0", 00:06:08.850 "bdev_name": "Malloc0" 00:06:08.850 }, 00:06:08.850 { 00:06:08.850 "nbd_device": "/dev/nbd1", 00:06:08.850 "bdev_name": "Malloc1" 00:06:08.850 } 00:06:08.850 ]' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.850 { 00:06:08.850 "nbd_device": "/dev/nbd0", 00:06:08.850 "bdev_name": "Malloc0" 00:06:08.850 }, 00:06:08.850 { 00:06:08.850 "nbd_device": "/dev/nbd1", 00:06:08.850 "bdev_name": "Malloc1" 00:06:08.850 } 00:06:08.850 ]' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.850 /dev/nbd1' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.850 /dev/nbd1' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.850 256+0 records in 00:06:08.850 256+0 records out 00:06:08.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430043 s, 244 MB/s 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.850 256+0 records in 00:06:08.850 256+0 records out 00:06:08.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142642 s, 73.5 MB/s 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.850 256+0 records in 00:06:08.850 256+0 records out 00:06:08.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148433 s, 70.6 MB/s 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.850 00:31:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.109 00:31:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.368 00:31:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.369 00:31:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.369 00:31:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.369 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.627 00:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.627 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.627 00:31:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.627 00:31:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.627 00:31:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.885 00:31:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.885 [2024-07-13 00:31:21.390282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.885 [2024-07-13 00:31:21.426454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.885 [2024-07-13 00:31:21.426455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.144 [2024-07-13 00:31:21.466792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.144 [2024-07-13 00:31:21.466833] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.678 00:31:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1204457 /var/tmp/spdk-nbd.sock 00:06:12.678 00:31:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1204457 ']' 00:06:12.678 00:31:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.678 00:31:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.678 00:31:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
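The write/verify cycle repeated in every round above boils down to one helper: seed 1 MiB of random data, push it through each NBD device with O_DIRECT, then byte-compare each device against the source file. A sketch reconstructed from the xtrace (argument splitting simplified, temp path shortened):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest
        if [ "$operation" = write ]; then
            # 256 x 4 KiB random blocks = the 1 MiB payload seen in the log.
            dd if=/dev/urandom of=$tmp_file bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if=$tmp_file of=$i bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                # cmp exits non-zero at the first differing byte, failing the test.
                cmp -b -n 1M $tmp_file $i
            done
            rm $tmp_file
        fi
    }

Note the throughput asymmetry visible above: writing /dev/urandom to a file reaches ~100-240 MB/s, while the O_DIRECT pushes to the NBD devices settle around 70-80 MB/s, since each block round-trips through the userspace target.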
00:06:12.678 00:31:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.678 00:31:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.936 00:31:24 event.app_repeat -- event/event.sh@39 -- # killprocess 1204457 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1204457 ']' 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1204457 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1204457 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1204457' 00:06:12.936 killing process with pid 1204457 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1204457 00:06:12.936 00:31:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1204457 00:06:13.196 spdk_app_start is called in Round 0. 00:06:13.196 Shutdown signal received, stop current app iteration 00:06:13.196 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:13.196 spdk_app_start is called in Round 1. 00:06:13.196 Shutdown signal received, stop current app iteration 00:06:13.196 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:13.196 spdk_app_start is called in Round 2. 00:06:13.196 Shutdown signal received, stop current app iteration 00:06:13.196 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:13.196 spdk_app_start is called in Round 3. 
00:06:13.196 Shutdown signal received, stop current app iteration 00:06:13.196 00:31:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.196 00:31:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:13.196 00:06:13.196 real 0m15.819s 00:06:13.196 user 0m34.556s 00:06:13.196 sys 0m2.310s 00:06:13.196 00:31:24 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.196 00:31:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.196 ************************************ 00:06:13.196 END TEST app_repeat 00:06:13.196 ************************************ 00:06:13.196 00:31:24 event -- common/autotest_common.sh@1142 -- # return 0 00:06:13.196 00:31:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.196 00:31:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.196 00:31:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.196 00:31:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.196 00:31:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.196 ************************************ 00:06:13.196 START TEST cpu_locks 00:06:13.196 ************************************ 00:06:13.196 00:31:24 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.455 * Looking for test storage... 00:06:13.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:13.455 00:31:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.455 00:31:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.455 00:31:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.455 00:31:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.455 00:31:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.455 00:31:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.455 00:31:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.455 ************************************ 00:06:13.455 START TEST default_locks 00:06:13.455 ************************************ 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1207224 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1207224 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1207224 ']' 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
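The kill step used throughout these tests (autotest_common.sh@948-972 in the trace) is deliberately defensive: it resolves the command name behind the pid and refuses to signal anything unexpected. A sketch reconstructed from the trace; the early-return branches are an assumption, only the checks themselves appear in the log:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        # kill -0 only tests that the process exists and is signalable.
        kill -0 $pid || return 1
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= $pid)
        fi
        # Never signal a sudo wrapper by mistake.
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill $pid
        # wait reaps the child so the next test starts from a clean slate.
        wait $pid || true
    }

In the app_repeat run above the resolved name is reactor_0, the SPDK main reactor thread.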
00:06:13.455 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.455 00:31:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.455 [2024-07-13 00:31:24.860143] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:13.455 [2024-07-13 00:31:24.860186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207224 ] 00:06:13.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.455 [2024-07-13 00:31:24.930441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.455 [2024-07-13 00:31:24.971599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.713 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.713 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:13.713 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1207224 00:06:13.713 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1207224 00:06:13.713 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.281 lslocks: write error 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1207224 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1207224 ']' 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1207224 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207224 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207224' 00:06:14.281 killing process with pid 1207224 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1207224 00:06:14.281 00:31:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1207224 00:06:14.540 00:31:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1207224 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1207224 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1207224 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1207224 ']' 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1207224) - No such process 00:06:14.540 ERROR: process (pid: 1207224) is no longer running 00:06:14.540 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.541 00:06:14.541 real 0m1.199s 00:06:14.541 user 0m1.155s 00:06:14.541 sys 0m0.534s 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.541 00:31:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.541 ************************************ 00:06:14.541 END TEST default_locks 00:06:14.541 ************************************ 00:06:14.541 00:31:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:14.541 00:31:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:14.541 00:31:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.541 00:31:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.541 00:31:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.541 ************************************ 00:06:14.541 START TEST default_locks_via_rpc 00:06:14.541 ************************************ 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1207482 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1207482 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1207482 ']' 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.541 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.800 [2024-07-13 00:31:26.124235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:14.800 [2024-07-13 00:31:26.124277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207482 ] 00:06:14.800 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.800 [2024-07-13 00:31:26.192204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.800 [2024-07-13 00:31:26.232973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1207482 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1207482 00:06:15.058 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
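Both default-locks tests assert the same invariant: a target started with -m 0x1 must hold a CPU-core file lock that lslocks can see. A minimal sketch of that assertion, with the lock-file name pattern taken from the grep in the trace:

    locks_exist() {
        local pid=$1
        # One spdk_cpu_lock file per claimed core; -m 0x1 claims core 0 only,
        # so exactly one lock is expected for this pid.
        # grep -q exits at the first match and closes the pipe, which is what
        # produces the stray "lslocks: write error" line in the log above.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

default_locks additionally checks the negative path seen above: after killprocess, waitforlisten on the dead pid must fail ("No such process"), proving the locks die with their owner.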
00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1207482 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1207482 ']' 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1207482 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207482 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207482' 00:06:15.318 killing process with pid 1207482 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1207482 00:06:15.318 00:31:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1207482 00:06:15.577 00:06:15.577 real 0m0.965s 00:06:15.577 user 0m0.894s 00:06:15.577 sys 0m0.462s 00:06:15.577 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.577 00:31:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.577 ************************************ 00:06:15.577 END TEST default_locks_via_rpc 00:06:15.577 ************************************ 00:06:15.577 00:31:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:15.577 00:31:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.577 00:31:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.577 00:31:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.577 00:31:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.577 ************************************ 00:06:15.577 START TEST non_locking_app_on_locked_coremask 00:06:15.577 ************************************ 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1207736 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1207736 /var/tmp/spdk.sock 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1207736 ']' 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.577 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.835 [2024-07-13 00:31:27.163454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:15.836 [2024-07-13 00:31:27.163496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207736 ] 00:06:15.836 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.836 [2024-07-13 00:31:27.228569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.836 [2024-07-13 00:31:27.269338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1207743 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1207743 /var/tmp/spdk2.sock 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1207743 ']' 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.095 00:31:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.095 [2024-07-13 00:31:27.503871] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:16.095 [2024-07-13 00:31:27.503917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207743 ] 00:06:16.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.095 [2024-07-13 00:31:27.580885] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
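The setup here is the launch pair worth noting: the first target claims core 0, and the second comes up on the same mask only because --disable-cpumask-locks skips the claim, which is what the "CPU core locks deactivated" notice above records. A condensed reproduction of the two launches, with the binary path and socket name as they appear in the trace (backgrounding with & is illustrative; the harness waits via waitforlisten instead):

    # First instance claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &

    # Second instance shares core 0: --disable-cpumask-locks skips the
    # lock claim, and -r gives it a separate RPC socket.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &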
00:06:16.095 [2024-07-13 00:31:27.580911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.380 [2024-07-13 00:31:27.661120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.948 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.948 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.948 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1207736 00:06:16.948 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1207736 00:06:16.948 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.207 lslocks: write error 00:06:17.207 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1207736 00:06:17.207 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1207736 ']' 00:06:17.207 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1207736 00:06:17.207 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:17.207 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.207 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207736 00:06:17.466 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.466 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.466 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207736' 00:06:17.466 killing process with pid 1207736 00:06:17.466 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1207736 00:06:17.466 00:31:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1207736 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1207743 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1207743 ']' 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1207743 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207743 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207743' 00:06:18.033 
killing process with pid 1207743 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1207743 00:06:18.033 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1207743 00:06:18.292 00:06:18.292 real 0m2.626s 00:06:18.292 user 0m2.721s 00:06:18.292 sys 0m0.869s 00:06:18.292 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.292 00:31:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.292 ************************************ 00:06:18.292 END TEST non_locking_app_on_locked_coremask 00:06:18.292 ************************************ 00:06:18.292 00:31:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.292 00:31:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.292 00:31:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.292 00:31:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.292 00:31:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.292 ************************************ 00:06:18.292 START TEST locking_app_on_unlocked_coremask 00:06:18.292 ************************************ 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1208238 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1208238 /var/tmp/spdk.sock 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1208238 ']' 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.292 00:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.550 [2024-07-13 00:31:29.856663] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:18.550 [2024-07-13 00:31:29.856706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208238 ] 00:06:18.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.550 [2024-07-13 00:31:29.923885] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:18.550 [2024-07-13 00:31:29.923909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.550 [2024-07-13 00:31:29.964372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1208246 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1208246 /var/tmp/spdk2.sock 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1208246 ']' 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.809 00:31:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.809 [2024-07-13 00:31:30.203942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:18.809 [2024-07-13 00:31:30.203992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208246 ] 00:06:18.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.809 [2024-07-13 00:31:30.277083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.809 [2024-07-13 00:31:30.356950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.745 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.745 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:19.745 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1208246 00:06:19.745 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1208246 00:06:19.745 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.004 lslocks: write error 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1208238 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1208238 ']' 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1208238 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1208238 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1208238' 00:06:20.004 killing process with pid 1208238 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1208238 00:06:20.004 00:31:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1208238 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1208246 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1208246 ']' 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1208246 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1208246 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1208246' 00:06:20.944 killing process with pid 1208246 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1208246 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1208246 00:06:20.944 00:06:20.944 real 0m2.694s 00:06:20.944 user 0m2.753s 00:06:20.944 sys 0m0.928s 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.944 00:31:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.944 ************************************ 00:06:20.944 END TEST locking_app_on_unlocked_coremask 00:06:20.944 ************************************ 00:06:21.203 00:31:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:21.203 00:31:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:21.203 00:31:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.203 00:31:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.203 00:31:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.203 ************************************ 00:06:21.203 START TEST locking_app_on_locked_coremask 00:06:21.203 ************************************ 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1208734 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1208734 /var/tmp/spdk.sock 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1208734 ']' 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.203 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.203 [2024-07-13 00:31:32.607390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:21.203 [2024-07-13 00:31:32.607426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208734 ] 00:06:21.203 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.203 [2024-07-13 00:31:32.673217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.203 [2024-07-13 00:31:32.714346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1208743 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1208743 /var/tmp/spdk2.sock 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1208743 /var/tmp/spdk2.sock 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1208743 /var/tmp/spdk2.sock 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1208743 ']' 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.469 00:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.469 [2024-07-13 00:31:32.943013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:21.469 [2024-07-13 00:31:32.943060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208743 ] 00:06:21.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.469 [2024-07-13 00:31:33.014804] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1208734 has claimed it. 00:06:21.469 [2024-07-13 00:31:33.014833] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1208743) - No such process 00:06:22.035 ERROR: process (pid: 1208743) is no longer running 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1208734 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1208734 00:06:22.035 00:31:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.601 lslocks: write error 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1208734 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1208734 ']' 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1208734 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1208734 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1208734' 00:06:22.601 killing process with pid 1208734 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1208734 00:06:22.601 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1208734 00:06:22.859 00:06:22.859 real 0m1.824s 00:06:22.859 user 0m1.936s 00:06:22.859 sys 0m0.620s 00:06:22.860 00:31:34 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.860 00:31:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.860 ************************************ 00:06:22.860 END TEST locking_app_on_locked_coremask 00:06:22.860 ************************************ 00:06:23.118 00:31:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:23.118 00:31:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.118 00:31:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.118 00:31:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.118 00:31:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.118 ************************************ 00:06:23.118 START TEST locking_overlapped_coremask 00:06:23.118 ************************************ 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1209002 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1209002 /var/tmp/spdk.sock 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1209002 ']' 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.118 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.118 [2024-07-13 00:31:34.510481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:23.118 [2024-07-13 00:31:34.510527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209002 ] 00:06:23.118 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.118 [2024-07-13 00:31:34.575621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.118 [2024-07-13 00:31:34.615549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.118 [2024-07-13 00:31:34.615660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.118 [2024-07-13 00:31:34.615660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.376 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.376 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:23.376 00:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1209183 00:06:23.376 00:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1209183 /var/tmp/spdk2.sock 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1209183 /var/tmp/spdk2.sock 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1209183 /var/tmp/spdk2.sock 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1209183 ']' 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.377 00:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.377 [2024-07-13 00:31:34.864331] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:23.377 [2024-07-13 00:31:34.864379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209183 ] 00:06:23.377 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.635 [2024-07-13 00:31:34.940015] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1209002 has claimed it. 00:06:23.635 [2024-07-13 00:31:34.940051] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1209183) - No such process 00:06:24.204 ERROR: process (pid: 1209183) is no longer running 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1209002 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1209002 ']' 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1209002 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1209002 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1209002' 00:06:24.204 killing process with pid 1209002 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1209002 00:06:24.204 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1209002 00:06:24.464 00:06:24.464 real 0m1.388s 00:06:24.464 user 0m3.753s 00:06:24.464 sys 0m0.389s 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.464 ************************************ 00:06:24.464 END TEST locking_overlapped_coremask 00:06:24.464 ************************************ 00:06:24.464 00:31:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.464 00:31:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:24.464 00:31:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.464 00:31:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.464 00:31:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.464 ************************************ 00:06:24.464 START TEST locking_overlapped_coremask_via_rpc 00:06:24.464 ************************************ 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1209278 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1209278 /var/tmp/spdk.sock 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1209278 ']' 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.464 00:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.464 [2024-07-13 00:31:35.966909] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:24.464 [2024-07-13 00:31:35.966953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209278 ] 00:06:24.464 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.723 [2024-07-13 00:31:36.035827] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.723 [2024-07-13 00:31:36.035852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.723 [2024-07-13 00:31:36.077283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.723 [2024-07-13 00:31:36.077389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.723 [2024-07-13 00:31:36.077390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1209493 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1209493 /var/tmp/spdk2.sock 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1209493 ']' 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.723 00:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.983 [2024-07-13 00:31:36.319521] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:24.983 [2024-07-13 00:31:36.319570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209493 ] 00:06:24.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.983 [2024-07-13 00:31:36.396907] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.983 [2024-07-13 00:31:36.396936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.983 [2024-07-13 00:31:36.478353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.983 [2024-07-13 00:31:36.482273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.983 [2024-07-13 00:31:36.482274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.922 [2024-07-13 00:31:37.147291] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1209278 has claimed it. 
00:06:25.922 request: 00:06:25.922 { 00:06:25.922 "method": "framework_enable_cpumask_locks", 00:06:25.922 "req_id": 1 00:06:25.922 } 00:06:25.922 Got JSON-RPC error response 00:06:25.922 response: 00:06:25.922 { 00:06:25.922 "code": -32603, 00:06:25.922 "message": "Failed to claim CPU core: 2" 00:06:25.922 } 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1209278 /var/tmp/spdk.sock 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1209278 ']' 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1209493 /var/tmp/spdk2.sock 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1209493 ']' 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
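The -32603 response above is the expected result: the first target (pid 1209278, mask 0x7) holds locks on cores 0-2, so the second target (mask 0x1c, cores 2-4) cannot re-enable its locks while core 2 is claimed. The contested core falls straight out of the two masks:

    # Core masks used by the two targets in this test:
    #   0x7  = 0b00111 -> cores 0,1,2  (first spdk_tgt, holds the locks)
    #   0x1c = 0b11100 -> cores 2,3,4  (second spdk_tgt)
    # A non-zero bitwise AND marks the contested core:
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2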
00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.922 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.182 00:06:26.182 real 0m1.612s 00:06:26.182 user 0m0.746s 00:06:26.182 sys 0m0.138s 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.182 00:31:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.182 ************************************ 00:06:26.182 END TEST locking_overlapped_coremask_via_rpc 00:06:26.182 ************************************ 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:26.182 00:31:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:26.182 00:31:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1209278 ]] 00:06:26.182 00:31:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1209278 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1209278 ']' 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1209278 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1209278 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1209278' 00:06:26.182 killing process with pid 1209278 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1209278 00:06:26.182 00:31:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1209278 00:06:26.441 00:31:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1209493 ]] 00:06:26.441 00:31:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1209493 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1209493 ']' 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1209493 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1209493 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1209493' 00:06:26.441 killing process with pid 1209493 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1209493 00:06:26.441 00:31:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1209493 00:06:26.700 00:31:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.700 00:31:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.700 00:31:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1209278 ]] 00:06:26.700 00:31:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1209278 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1209278 ']' 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1209278 00:06:26.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1209278) - No such process 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1209278 is not found' 00:06:26.700 Process with pid 1209278 is not found 00:06:26.700 00:31:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1209493 ]] 00:06:26.700 00:31:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1209493 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1209493 ']' 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1209493 00:06:26.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1209493) - No such process 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1209493 is not found' 00:06:26.700 Process with pid 1209493 is not found 00:06:26.700 00:31:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.700 00:06:26.700 real 0m13.569s 00:06:26.700 user 0m23.042s 00:06:26.700 sys 0m4.833s 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.700 00:31:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.700 ************************************ 00:06:26.700 END TEST cpu_locks 00:06:26.700 ************************************ 00:06:26.959 00:31:38 event -- common/autotest_common.sh@1142 -- # return 0 00:06:26.959 00:06:26.959 real 0m37.162s 00:06:26.959 user 1m9.706s 00:06:26.959 sys 0m8.072s 00:06:26.959 00:31:38 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.959 00:31:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.959 ************************************ 00:06:26.959 END TEST event 00:06:26.959 ************************************ 00:06:26.959 00:31:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.959 00:31:38 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.959 00:31:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.959 00:31:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.959 00:31:38 
-- common/autotest_common.sh@10 -- # set +x 00:06:26.959 ************************************ 00:06:26.959 START TEST thread 00:06:26.959 ************************************ 00:06:26.959 00:31:38 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.959 * Looking for test storage... 00:06:26.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:26.959 00:31:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.959 00:31:38 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.959 00:31:38 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.959 00:31:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.959 ************************************ 00:06:26.959 START TEST thread_poller_perf 00:06:26.959 ************************************ 00:06:26.959 00:31:38 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.959 [2024-07-13 00:31:38.497508] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:26.959 [2024-07-13 00:31:38.497574] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209830 ] 00:06:27.218 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.218 [2024-07-13 00:31:38.565391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.218 [2024-07-13 00:31:38.605130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.218 Running 1000 pollers for 1 seconds with 1 microseconds period. 
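For reference, the poller_perf flags in the invocation above line up with the NOTICE it prints: -b appears to be the number of pollers, -l the poll period in microseconds, and -t the run time in seconds (the second run below passes -l 0, i.e. the pollers run back to back). A minimal sketch of reproducing both runs from an SPDK checkout, assuming the binary was built at the path the test uses:

  # Sketch only; the relative path assumes the spdk repo root as CWD.
  PERF=test/thread/poller_perf/poller_perf
  $PERF -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 second
  $PERF -b 1000 -l 0 -t 1   # same run with a 0 us period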
00:06:28.155 ======================================
00:06:28.155 busy:2309150156 (cyc)
00:06:28.155 total_run_count: 408000
00:06:28.155 tsc_hz: 2300000000 (cyc)
00:06:28.155 ======================================
00:06:28.155 poller_cost: 5659 (cyc), 2460 (nsec)
00:06:28.155
00:06:28.155 real 0m1.197s
00:06:28.155 user 0m1.110s
00:06:28.155 sys 0m0.082s
00:31:39 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.155 00:31:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 ************************************ 00:06:28.155 END TEST thread_poller_perf 00:06:28.155 ************************************ 00:06:28.155 00:31:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:28.155 00:31:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.155 00:31:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:28.155 00:31:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.155 00:31:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.414 ************************************ 00:06:28.414 START TEST thread_poller_perf 00:06:28.414 ************************************ 00:06:28.414 00:31:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.414 [2024-07-13 00:31:39.766756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:28.414 [2024-07-13 00:31:39.766820] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210081 ] 00:06:28.414 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.414 [2024-07-13 00:31:39.838376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.414 [2024-07-13 00:31:39.878582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.414 Running 1000 pollers for 1 seconds with 0 microseconds period. 
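The summary block above can be checked by hand; a small bash sketch of the arithmetic (the relation poller_cost = busy cycles / total_run_count, converted to nanoseconds via tsc_hz, is inferred from the printed numbers rather than quoted from the tool's source):

  busy=2309150156 runs=408000 tsc_hz=2300000000
  cyc=$(( busy / runs ))                  # 5659 cycles per poller invocation
  nsec=$(( cyc * 1000000000 / tsc_hz ))   # ~2460 ns at a 2.3 GHz TSC
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"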
00:06:29.794 ======================================
00:06:29.794 busy:2301647894 (cyc)
00:06:29.794 total_run_count: 5273000
00:06:29.794 tsc_hz: 2300000000 (cyc)
00:06:29.794 ======================================
00:06:29.794 poller_cost: 436 (cyc), 189 (nsec)
00:06:29.794
00:06:29.794 real 0m1.199s
00:06:29.794 user 0m1.109s
00:06:29.794 sys 0m0.085s
00:31:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.794 00:31:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.794 ************************************ 00:06:29.794 END TEST thread_poller_perf 00:06:29.794 ************************************ 00:06:29.794 00:31:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:29.794 00:31:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:29.794 00:06:29.794 real 0m2.624s 00:06:29.794 user 0m2.310s 00:06:29.794 sys 0m0.322s 00:31:40 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.794 00:31:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.794 ************************************ 00:06:29.794 END TEST thread 00:06:29.794 ************************************ 00:06:29.794 00:31:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.794 00:31:41 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:29.794 00:31:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.794 00:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.794 00:31:41 -- common/autotest_common.sh@10 -- # set +x 00:06:29.794 ************************************ 00:06:29.794 START TEST accel 00:06:29.794 ************************************ 00:06:29.794 00:31:41 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:29.794 * Looking for test storage... 00:06:29.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:29.794 00:31:41 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:29.794 00:31:41 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:29.794 00:31:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.794 00:31:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1210373 00:06:29.794 00:31:41 accel -- accel/accel.sh@63 -- # waitforlisten 1210373 00:06:29.794 00:31:41 accel -- common/autotest_common.sh@829 -- # '[' -z 1210373 ']' 00:06:29.794 00:31:41 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.794 00:31:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:29.794 00:31:41 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.794 00:31:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:29.794 00:31:41 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
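The waitforlisten step above blocks until the freshly started spdk_tgt is accepting RPCs on /var/tmp/spdk.sock before the accel tests proceed. A hedged sketch of the same idea in bash (the function name, timeout, and poll interval here are illustrative, not autotest_common.sh's actual implementation):

  wait_for_rpc_sock() {
    # Poll until the UNIX domain socket appears, with a 30-second deadline.
    local sock=${1:-/var/tmp/spdk.sock} deadline=$((SECONDS + 30))
    while (( SECONDS < deadline )); do
      [[ -S $sock ]] && return 0
      sleep 0.2
    done
    echo "timed out waiting for $sock" >&2
    return 1
  }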
00:06:29.794 00:31:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.794 00:31:41 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.794 00:31:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.794 00:31:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.794 00:31:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.794 00:31:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.794 00:31:41 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.794 00:31:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:29.794 00:31:41 accel -- accel/accel.sh@41 -- # jq -r . 00:06:29.794 [2024-07-13 00:31:41.191346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:29.794 [2024-07-13 00:31:41.191394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210373 ] 00:06:29.794 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.794 [2024-07-13 00:31:41.259545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.794 [2024-07-13 00:31:41.298986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.732 00:31:41 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.732 00:31:41 accel -- common/autotest_common.sh@862 -- # return 0 00:06:30.732 00:31:41 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:30.732 00:31:41 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:30.732 00:31:41 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:30.732 00:31:41 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:30.732 00:31:41 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:30.732 00:31:41 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:30.732 00:31:41 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:30.732 00:31:41 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.732 00:31:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.733 00:31:41 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 
00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # IFS== 00:06:30.733 00:31:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:30.733 00:31:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:30.733 00:31:42 accel -- accel/accel.sh@75 -- # killprocess 1210373 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@948 -- # '[' -z 1210373 ']' 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@952 -- # kill -0 1210373 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@953 -- # uname 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1210373 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1210373' 00:06:30.733 killing process with pid 1210373 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@967 -- # kill 1210373 00:06:30.733 00:31:42 accel -- common/autotest_common.sh@972 -- # wait 1210373 00:06:30.992 00:31:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:30.992 00:31:42 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:30.992 00:31:42 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:30.992 00:31:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.992 00:31:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.992 00:31:42 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:30.992 00:31:42 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:30.992 00:31:42 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.992 00:31:42 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:30.992 00:31:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.992 00:31:42 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:30.992 00:31:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.992 00:31:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.992 00:31:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.992 ************************************ 00:06:30.992 START TEST accel_missing_filename 00:06:30.992 ************************************ 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.992 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:30.992 00:31:42 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:30.992 [2024-07-13 00:31:42.532443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:30.992 [2024-07-13 00:31:42.532498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210637 ] 00:06:31.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.252 [2024-07-13 00:31:42.601912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.252 [2024-07-13 00:31:42.641344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.252 [2024-07-13 00:31:42.682299] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.252 [2024-07-13 00:31:42.742295] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:31.252 A filename is required. 
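The "A filename is required." failure above is exactly what this negative test asserts: compress workloads need an uncompressed input file passed via -l (see the accel_perf option list printed later in this log). The working form, as used by the compress_verify test that follows, looks like:

  # Paths taken from this log; -l names the uncompressed input file.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib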
00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.252 00:06:31.252 real 0m0.303s 00:06:31.252 user 0m0.215s 00:06:31.252 sys 0m0.123s 00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.252 00:31:42 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:31.252 ************************************ 00:06:31.252 END TEST accel_missing_filename 00:06:31.252 ************************************ 00:06:31.511 00:31:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.511 00:31:42 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.511 00:31:42 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:31.511 00:31:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.511 00:31:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.511 ************************************ 00:06:31.511 START TEST accel_compress_verify 00:06:31.511 ************************************ 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.511 00:31:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.511 00:31:42 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:31.511 00:31:42 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:31.511 [2024-07-13 00:31:42.900710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:31.511 [2024-07-13 00:31:42.900777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210726 ] 00:06:31.511 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.511 [2024-07-13 00:31:42.971764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.511 [2024-07-13 00:31:43.011544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.511 [2024-07-13 00:31:43.052508] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.770 [2024-07-13 00:31:43.112556] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:31.771 00:06:31.771 Compression does not support the verify option, aborting. 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.771 00:06:31.771 real 0m0.306s 00:06:31.771 user 0m0.224s 00:06:31.771 sys 0m0.123s 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.771 00:31:43 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.771 ************************************ 00:06:31.771 END TEST accel_compress_verify 00:06:31.771 ************************************ 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.771 00:31:43 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.771 ************************************ 00:06:31.771 START TEST accel_wrong_workload 00:06:31.771 ************************************ 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:31.771 00:31:43 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:31.771 00:31:43 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:31.771 Unsupported workload type: foobar 00:06:31.771 [2024-07-13 00:31:43.259954] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:31.771 accel_perf options: 00:06:31.771 [-h help message] 00:06:31.771 [-q queue depth per core] 00:06:31.771 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:31.771 [-T number of threads per core 00:06:31.771 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:31.771 [-t time in seconds] 00:06:31.771 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:31.771 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:31.771 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:31.771 [-l for compress/decompress workloads, name of uncompressed input file 00:06:31.771 [-S for crc32c workload, use this seed value (default 0) 00:06:31.771 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:31.771 [-f for fill workload, use this BYTE value (default 255) 00:06:31.771 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:31.771 [-y verify result if this switch is on] 00:06:31.771 [-a tasks to allocate per core (default: same value as -q)] 00:06:31.771 Can be used to spread operations across a wider range of memory. 
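The "Unsupported workload type: foobar" error is the expected result here; -w must name one of the workloads enumerated in the option list above. For contrast, the first passing accel_perf invocation later in this section takes the form (the JSON config fd the test also passes is omitted for brevity):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -y   # crc32c workload, seed value 32, verify results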
00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.771 00:06:31.771 real 0m0.030s 00:06:31.771 user 0m0.016s 00:06:31.771 sys 0m0.014s 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.771 00:31:43 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:31.771 ************************************ 00:06:31.771 END TEST accel_wrong_workload 00:06:31.771 ************************************ 00:06:31.771 Error: writing output failed: Broken pipe 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.771 00:31:43 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.771 00:31:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.030 ************************************ 00:06:32.030 START TEST accel_negative_buffers 00:06:32.030 ************************************ 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.030 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:32.030 00:31:43 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:32.030 -x option must be non-negative. 
00:06:32.030 [2024-07-13 00:31:43.363708] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:32.030 accel_perf options: 00:06:32.030 [-h help message] 00:06:32.030 [-q queue depth per core] 00:06:32.030 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:32.030 [-T number of threads per core 00:06:32.030 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:32.030 [-t time in seconds] 00:06:32.030 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:32.030 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:32.030 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:32.030 [-l for compress/decompress workloads, name of uncompressed input file 00:06:32.030 [-S for crc32c workload, use this seed value (default 0) 00:06:32.031 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:32.031 [-f for fill workload, use this BYTE value (default 255) 00:06:32.031 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:32.031 [-y verify result if this switch is on] 00:06:32.031 [-a tasks to allocate per core (default: same value as -q)] 00:06:32.031 Can be used to spread operations across a wider range of memory. 00:06:32.031 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:32.031 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.031 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.031 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.031 00:06:32.031 real 0m0.035s 00:06:32.031 user 0m0.019s 00:06:32.031 sys 0m0.016s 00:06:32.031 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.031 00:31:43 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:32.031 ************************************ 00:06:32.031 END TEST accel_negative_buffers 00:06:32.031 ************************************ 00:06:32.031 Error: writing output failed: Broken pipe 00:06:32.031 00:31:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.031 00:31:43 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:32.031 00:31:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:32.031 00:31:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.031 00:31:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.031 ************************************ 00:06:32.031 START TEST accel_crc32c 00:06:32.031 ************************************ 00:06:32.031 00:31:43 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:32.031 00:31:43 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:32.031 [2024-07-13 00:31:43.461477] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:32.031 [2024-07-13 00:31:43.461531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210938 ] 00:06:32.031 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.031 [2024-07-13 00:31:43.528784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.031 [2024-07-13 00:31:43.572334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.291 00:31:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:33.230 00:31:44 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.230 00:06:33.230 real 0m1.309s 00:06:33.230 user 0m1.191s 00:06:33.230 sys 0m0.132s 00:06:33.230 00:31:44 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.230 00:31:44 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 ************************************ 00:06:33.230 END TEST accel_crc32c 00:06:33.230 ************************************ 00:06:33.230 00:31:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.230 00:31:44 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:33.230 00:31:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:33.230 00:31:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.230 00:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.490 ************************************ 00:06:33.490 START TEST accel_crc32c_C2 00:06:33.490 ************************************ 00:06:33.490 00:31:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:33.490 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:33.491 [2024-07-13 00:31:44.831031] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:33.491 [2024-07-13 00:31:44.831086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211189 ] 00:06:33.491 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.491 [2024-07-13 00:31:44.900336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.491 [2024-07-13 00:31:44.939339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:33.491 00:31:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.873 00:06:34.873 real 0m1.301s 00:06:34.873 user 0m1.190s 00:06:34.873 sys 0m0.126s 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.873 00:31:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:34.873 ************************************ 00:06:34.873 END TEST accel_crc32c_C2 00:06:34.873 ************************************ 00:06:34.873 00:31:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.873 00:31:46 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:34.873 00:31:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:34.873 00:31:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.873 00:31:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.873 ************************************ 00:06:34.873 START TEST accel_copy 00:06:34.873 ************************************ 00:06:34.873 00:31:46 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.873 00:31:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:34.874 [2024-07-13 00:31:46.194558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:34.874 [2024-07-13 00:31:46.194611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211443 ] 00:06:34.874 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.874 [2024-07-13 00:31:46.262724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.874 [2024-07-13 00:31:46.301667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.874 00:31:46 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.254 
00:31:47 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:36.254 00:31:47 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:36.254
00:06:36.254 real 0m1.299s
00:06:36.254 user 0m1.185s
00:06:36.254 sys 0m0.128s
00:06:36.254 00:31:47 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:36.254 00:31:47 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:06:36.254 ************************************
00:06:36.254 END TEST accel_copy
00:06:36.254 ************************************
00:06:36.254 00:31:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:36.254 00:31:47 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:36.254 00:31:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:36.254 00:31:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:36.254 00:31:47 accel -- common/autotest_common.sh@10 -- # set +x
00:06:36.254 ************************************
00:06:36.254 START TEST accel_fill
00:06:36.254 ************************************
00:06:36.254 00:31:47 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@12 -- #
build_accel_config 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:36.254 [2024-07-13 00:31:47.563857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:36.254 [2024-07-13 00:31:47.563914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211691 ] 00:06:36.254 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.254 [2024-07-13 00:31:47.632776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.254 [2024-07-13 00:31:47.672618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.254 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
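Every val=... / case "$var" in / IFS=: / read -r var val quartet in the trace above is one pass of the loop in accel.sh that walks the test's settings as colon-separated name/value pairs (val=software, val=32, val='1 seconds', and so on) and records the opcode and module for the final assertions. A minimal sketch of such a loop, with the input format assumed from the trace rather than taken from accel.sh itself:

  # Assumed shape of the parser; field names and sample input are illustrative.
  accel_opc='' accel_module=''
  while IFS=: read -r var val; do
      case "$var" in
          opc)    accel_opc=$val ;;      # e.g. copy, fill, crc32c
          module) accel_module=$val ;;   # e.g. software
          *)      ;;                     # queue depth, block size, run time, ...
      esac
  done < <(printf '%s\n' opc:copy module:software)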
00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.255 00:31:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:48 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:37.688 00:31:48 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:37.688
00:06:37.688 real 0m1.309s
00:06:37.688 user 0m1.199s
00:06:37.688 sys 0m0.123s
00:06:37.688 00:31:48 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:37.688 00:31:48 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:06:37.688 ************************************
00:06:37.688 END TEST accel_fill
00:06:37.688 ************************************
00:06:37.688 00:31:48 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:37.688 00:31:48 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:37.688 00:31:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:37.688 00:31:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:37.688 00:31:48 accel -- common/autotest_common.sh@10 -- # set +x
00:06:37.688 ************************************
00:06:37.688 START TEST accel_copy_crc32c
00:06:37.688 ************************************
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:37.688 00:31:48 accel.accel_copy_crc32c --
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:37.688 00:31:48 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:37.688 [2024-07-13 00:31:48.936310] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:37.688 [2024-07-13 00:31:48.936384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211945 ] 00:06:37.688 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.688 [2024-07-13 00:31:49.002914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.688 [2024-07-13 00:31:49.042399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.688 
00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:37.688 00:31:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:39.068
00:06:39.068 real 0m1.307s
00:06:39.068 user 0m1.203s
00:06:39.068 sys 0m0.119s
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:39.068 00:31:50 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:39.068 ************************************
00:06:39.068 END TEST accel_copy_crc32c
00:06:39.068 ************************************
00:06:39.068 00:31:50 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:39.068 00:31:50 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:39.068 00:31:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:39.068 00:31:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:39.068 00:31:50 accel -- common/autotest_common.sh@10 -- # set +x
00:06:39.068 ************************************
00:06:39.068 START TEST accel_copy_crc32c_C2
00:06:39.068 ************************************
00:06:39.068 00:31:50
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:39.068 [2024-07-13 00:31:50.309336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:39.068 [2024-07-13 00:31:50.309402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212195 ] 00:06:39.068 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.068 [2024-07-13 00:31:50.377869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.068 [2024-07-13 00:31:50.417077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
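The `/var/jenkins/.../accel_perf -c /dev/fd/62 ...` command lines above show how the harness hands accel_perf its JSON configuration: build_accel_config assembles the accel_json_cfg array, and the result is supplied through bash process substitution, which surfaces as a /dev/fd path rather than a file on disk. A small sketch of that technique (the config body and helper name are illustrative, not the harness's real code):

  # Process substitution: bash exposes the pipe as /dev/fd/NN, hence "-c /dev/fd/62".
  accel_json_cfg=()                          # optionally filled with '"key": value' fragments
  emit_config() { local IFS=,; echo "{ ${accel_json_cfg[*]} }"; }
  ./build/examples/accel_perf -c <(emit_config | jq -r .) -t 1 -w copy_crc32c -y -C 2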
00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:39.068 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.069 00:31:50 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
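The three accel.sh line-27 checks that follow close out each test. Under xtrace they appear with variables already expanded, so `[[ -n software ]]` is really a non-empty test on the detected module, `[[ -n copy_crc32c ]]` one on the exercised opcode, and the `\s\o\f\t\w\a\r\e` form is simply how bash escapes the right-hand side of a `[[ ... == ... ]]` comparison when tracing. The un-expanded source presumably looks something like this (variable names inferred from the trace):

  # Presumed source of the traced assertions -- xtrace shows them post-expansion.
  [[ -n "$accel_module" ]]              # a backend module was reported
  [[ -n "$accel_opc" ]]                 # an opcode was exercised
  [[ "$accel_module" == "software" ]]   # and it was the software engine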
00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:40.448
00:06:40.448 real 0m1.308s
00:06:40.448 user 0m1.197s
00:06:40.448 sys 0m0.126s
00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:40.448 00:31:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:40.448 ************************************
00:06:40.448 END TEST accel_copy_crc32c_C2
00:06:40.448 ************************************
00:06:40.448 00:31:51 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:40.448 00:31:51 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:40.448 00:31:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:40.448 00:31:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:40.448 00:31:51 accel -- common/autotest_common.sh@10 -- # set +x
00:06:40.448 ************************************
00:06:40.448 START TEST accel_dualcast
00:06:40.448 ************************************
00:06:40.448 00:31:51 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:06:40.448 [2024-07-13 00:31:51.677810] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
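The runs of `[[ 0 -gt 0 ]]` and `[[ -n '' ]]` just above look odd because xtrace prints conditionals after parameter expansion: each appears to be build_accel_config testing whether some optional feature contributed fragments to accel_json_cfg, and every test came out empty in these runs. The trailing `local IFS=,` plus `jq -r .` then join whatever was collected into a single JSON document. A sketch of that joining idiom under those assumptions (extra_cfg is a made-up stand-in for the real conditional sources):

  # Illustrative: comma-join array elements via IFS, then validate/pretty-print with jq.
  build_accel_config() {
      local accel_json_cfg=()
      [[ ${#extra_cfg[@]} -gt 0 ]] && accel_json_cfg+=("${extra_cfg[@]}")  # traced as [[ 0 -gt 0 ]]
      local IFS=,
      echo "{ ${accel_json_cfg[*]} }" | jq -r .   # "${arr[*]}" joins on the first IFS character
  }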
00:06:40.448 [2024-07-13 00:31:51.677857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212441 ] 00:06:40.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.448 [2024-07-13 00:31:51.744211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.448 [2024-07-13 00:31:51.783624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.448 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:40.449 00:31:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.825 00:31:52 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:41.825 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:41.826 00:31:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:41.826 00:31:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:41.826 00:31:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:41.826 00:31:52 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:41.826
00:06:41.826 real 0m1.304s
00:06:41.826 user 0m1.194s
00:06:41.826 sys 0m0.124s
00:06:41.826 00:31:52 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:41.826 00:31:52 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:41.826 ************************************
00:06:41.826 END TEST accel_dualcast
00:06:41.826 ************************************
00:06:41.826 00:31:52 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:41.826 00:31:52 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:41.826 00:31:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:41.826 00:31:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:41.826 00:31:52 accel -- common/autotest_common.sh@10 -- # set +x
00:06:41.826 ************************************
00:06:41.826 START TEST accel_compare
00:06:41.826 ************************************
00:06:41.826 00:31:53 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:06:41.826 [2024-07-13 00:31:53.050047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:41.826 [2024-07-13 00:31:53.050113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212694 ] 00:06:41.826 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.826 [2024-07-13 00:31:53.119476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.826 [2024-07-13 00:31:53.159328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.826 00:31:53 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 
00:31:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:43.203 00:31:54 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.203 00:06:43.203 real 0m1.310s 00:06:43.203 user 0m1.198s 00:06:43.203 sys 0m0.124s 00:06:43.203 00:31:54 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.203 00:31:54 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 ************************************ 00:06:43.203 END TEST accel_compare 00:06:43.203 ************************************ 00:06:43.203 00:31:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.203 00:31:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:43.203 00:31:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:43.203 00:31:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.203 00:31:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 ************************************ 00:06:43.203 START TEST accel_xor 00:06:43.203 ************************************ 00:06:43.203 00:31:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.203 00:31:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:43.204 [2024-07-13 00:31:54.421740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
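The whole accel_compare step above reduces to the single accel_perf command traced at accel.sh@12. A minimal sketch for rerunning it by hand; the $SPDK variable is a convenience introduced here, pointing at this job's build tree, and the -c /dev/fd/62 JSON config is dropped because the trace shows build_accel_config produced an empty one:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run for one second; -w compare: opcode under test; -y: verify the results
    "$SPDK/build/examples/accel_perf" -t 1 -w compare -y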
00:06:43.204 00:31:54 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:43.204 00:31:54 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:43.204 ************************************
00:06:43.204 START TEST accel_xor
00:06:43.204 ************************************
00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:43.204 00:31:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:43.204 [2024-07-13 00:31:54.421740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:43.204 [2024-07-13 00:31:54.421800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212939 ]
00:06:43.204 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.204 [2024-07-13 00:31:54.493066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.204 [2024-07-13 00:31:54.533818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... config-flag checks and per-variable read-back trace elided; values read: 0x1, xor, 2, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes ...]
00:06:44.579 00:31:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:44.579 00:31:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:44.579 00:31:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:44.579 real 0m1.312s
00:06:44.579 user 0m1.202s
00:06:44.579 sys 0m0.123s
00:06:44.579 ************************************
00:06:44.579 END TEST accel_xor
00:06:44.579 ************************************
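This xor pass reads back a bare '2' next to the opcode, consistent with accel_perf defaulting to two xor source buffers when -x is not given (an inference from the trace, which leaves the value unlabeled). Same sketch as the compare one above with the opcode swapped:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w xor -y   # two xor source buffers by default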
00:06:44.580 00:31:55 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:44.580 00:31:55 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:44.580 ************************************
00:06:44.580 START TEST accel_xor
00:06:44.580 ************************************
00:06:44.580 00:31:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:06:44.580 00:31:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:44.580 00:31:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:06:44.580 [2024-07-13 00:31:55.801628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:44.580 [2024-07-13 00:31:55.801695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213187 ]
00:06:44.580 EAL: No free 2048 kB hugepages reported on node 1
00:06:44.580 [2024-07-13 00:31:55.871720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.580 [2024-07-13 00:31:55.912512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... config-flag checks and per-variable read-back trace elided; values read: 0x1, xor, 3, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes ...]
00:06:45.961 00:31:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:45.961 00:31:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:45.961 00:31:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:45.961 real 0m1.312s
00:06:45.961 user 0m1.204s
00:06:45.961 sys 0m0.121s
00:06:45.961 ************************************
00:06:45.961 END TEST accel_xor
00:06:45.961 ************************************
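The second xor pass is identical except that the harness adds -x 3, and the read-back value changes from 2 to 3 to match; -x selects the number of xor source buffers:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3   # xor across three source buffers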
00:06:45.961 00:31:57 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:45.961 00:31:57 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:45.961 ************************************
00:06:45.961 START TEST accel_dif_verify
00:06:45.961 ************************************
00:06:45.961 00:31:57 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:45.961 00:31:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:45.961 00:31:57 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:06:45.961 [2024-07-13 00:31:57.176268] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:45.961 [2024-07-13 00:31:57.176329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213440 ]
00:06:45.961 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.961 [2024-07-13 00:31:57.228785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.961 [2024-07-13 00:31:57.269720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... config-flag checks and per-variable read-back trace elided; values read: 0x1, dif_verify, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No ...]
00:06:46.899 00:31:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:46.899 00:31:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:46.899 00:31:58 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:46.899 real 0m1.292s
00:06:46.899 user 0m1.194s
00:06:46.899 sys 0m0.112s
00:06:46.899 ************************************
00:06:46.899 END TEST accel_dif_verify
00:06:46.899 ************************************
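For the DIF tests the read-back carries four sizes instead of one: two 4096-byte values (plausibly the transfer and data-buffer sizes), a 512-byte block size, and an 8-byte metadata size, i.e. one DIF tuple per 512-byte block and eight per 4096-byte transfer. That reading is an interpretation; the trace leaves the values unlabeled. By-hand rerun (note the harness passes no -y here, matching the 'No' in the read-back):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w dif_verify   # checks the 8-byte DIF on each 512-byte block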
00:06:46.899 00:31:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:47.159 00:31:58 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:47.159 ************************************
00:06:47.159 START TEST accel_dif_generate
00:06:47.159 ************************************
00:06:47.159 00:31:58 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:47.159 00:31:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:47.159 00:31:58 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:06:47.159 [2024-07-13 00:31:58.534872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:47.159 [2024-07-13 00:31:58.534934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213689 ]
00:06:47.159 EAL: No free 2048 kB hugepages reported on node 1
00:06:47.159 [2024-07-13 00:31:58.601895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.159 [2024-07-13 00:31:58.640916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... config-flag checks and per-variable read-back trace elided; values read: 0x1, dif_generate, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No ...]
00:06:48.540 00:31:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:48.540 00:31:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:48.540 00:31:59 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:48.540 real 0m1.305s
00:06:48.540 user 0m1.201s
00:06:48.540 sys 0m0.119s
00:06:48.540 ************************************
00:06:48.540 END TEST accel_dif_generate
00:06:48.540 ************************************
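dif_generate is the producer side of the previous step: same 4096/512/8 geometry, but the DIF tuples are written rather than checked. Sketch, same assumptions as above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate   # writes a DIF tuple per block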
00:06:48.541 00:31:59 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:48.541 00:31:59 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:48.541 ************************************
00:06:48.541 START TEST accel_dif_generate_copy
00:06:48.541 ************************************
00:06:48.541 00:31:59 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:48.541 00:31:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:48.541 00:31:59 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:48.541 [2024-07-13 00:31:59.902165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:48.541 [2024-07-13 00:31:59.902210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213936 ]
00:06:48.541 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.541 [2024-07-13 00:31:59.968990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.541 [2024-07-13 00:32:00.008538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... config-flag checks and per-variable read-back trace elided; values read: 0x1, dif_generate_copy, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', No ...]
00:06:49.921 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:49.921 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:06:49.921 00:32:01 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:49.921 real 0m1.304s
00:06:49.921 user 0m1.195s
00:06:49.921 sys 0m0.122s
00:06:49.921 ************************************
00:06:49.921 END TEST accel_dif_generate_copy
00:06:49.921 ************************************
accel/accel.sh@16 -- # local accel_opc 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:49.921 [2024-07-13 00:32:01.274701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:49.921 [2024-07-13 00:32:01.274768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214191 ] 00:06:49.921 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.921 [2024-07-13 00:32:01.325042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.921 [2024-07-13 00:32:01.365529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.921 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:49.922 00:32:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.299 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:51.300 00:32:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.300 00:06:51.300 real 0m1.293s 00:06:51.300 user 0m1.197s 00:06:51.300 sys 0m0.111s 00:06:51.300 00:32:02 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.300 00:32:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:51.300 ************************************ 00:06:51.300 END TEST accel_comp 00:06:51.300 ************************************ 00:06:51.300 00:32:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.300 00:32:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.300 00:32:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.300 00:32:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.300 00:32:02 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.300 ************************************ 00:06:51.300 START TEST accel_decomp 00:06:51.300 ************************************ 00:06:51.300 00:32:02 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:51.300 [2024-07-13 00:32:02.627576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
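The accel_decomp run traced above reduces to one accel_perf invocation. The sketch below is a minimal hand-run equivalent, assuming the SPDK tree is built at the same workspace path and assuming the -c /dev/fd/62 JSON config that build_accel_config injects can be omitted (the val trace selects the plain software module in any case):

# Hypothetical manual re-run of the traced command; paths are taken from the log above.
# -t 1: run for one second (matches the '1 seconds' val), -w decompress: workload,
# -l: compressed input file, -y: verify the output.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y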
00:06:51.300 [2024-07-13 00:32:02.627633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214438 ] 00:06:51.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.300 [2024-07-13 00:32:02.697402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.300 [2024-07-13 00:32:02.737630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.300 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.301 00:32:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.677 00:32:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.677 00:06:52.677 real 0m1.309s 00:06:52.677 user 0m1.198s 00:06:52.677 sys 0m0.124s 00:06:52.677 00:32:03 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.677 00:32:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:52.677 ************************************ 00:06:52.677 END TEST accel_decomp 00:06:52.677 ************************************ 00:06:52.677 00:32:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.677 00:32:03 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.677 00:32:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:52.677 00:32:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.677 00:32:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.677 ************************************ 00:06:52.677 START TEST accel_decomp_full 00:06:52.677 ************************************ 00:06:52.677 00:32:03 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.677 00:32:03 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:52.677 00:32:03 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:52.677 [2024-07-13 00:32:04.003957] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:52.677 [2024-07-13 00:32:04.004025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214683 ] 00:06:52.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.677 [2024-07-13 00:32:04.054144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.677 [2024-07-13 00:32:04.094644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.677 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.678 00:32:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.057 00:32:05 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.057 00:06:54.057 real 0m1.303s 00:06:54.057 user 0m1.200s 00:06:54.057 sys 0m0.116s 00:06:54.057 00:32:05 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.057 00:32:05 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:54.057 ************************************ 00:06:54.057 END TEST accel_decomp_full 00:06:54.057 ************************************ 00:06:54.057 00:32:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.057 00:32:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.057 00:32:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:54.057 00:32:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.057 00:32:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.057 ************************************ 00:06:54.057 START TEST accel_decomp_mcore 00:06:54.057 ************************************ 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:54.057 [2024-07-13 00:32:05.368322] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
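The accel_decomp_mcore run launching here adds -m 0xf to the same command; the EAL output that follows shows the mask taking effect, with four cores available and reactors started on cores 0 through 3. A hand-run equivalent, same assumptions as above:

# -m 0xf: spread the decompress workload across cores 0-3 (mask copied from the traced command).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf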
00:06:54.057 [2024-07-13 00:32:05.368382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214936 ] 00:06:54.057 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.057 [2024-07-13 00:32:05.439625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.057 [2024-07-13 00:32:05.482160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.057 [2024-07-13 00:32:05.482270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.057 [2024-07-13 00:32:05.482314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.057 [2024-07-13 00:32:05.482315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.057 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.058 00:32:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.434 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.434 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.434 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.434 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.434 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.434 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.435 00:06:55.435 real 0m1.323s 00:06:55.435 user 0m4.532s 00:06:55.435 sys 0m0.134s 00:06:55.435 00:32:06 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.435 00:32:06 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:55.435 ************************************ 00:06:55.435 END TEST accel_decomp_mcore 00:06:55.435 ************************************ 00:06:55.435 00:32:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.435 00:32:06 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.435 00:32:06 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:55.435 00:32:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.435 00:32:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.435 ************************************ 00:06:55.435 START TEST accel_decomp_full_mcore 00:06:55.435 ************************************ 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:55.435 [2024-07-13 00:32:06.757036] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
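accel_decomp_full_mcore, starting here, combines the two preceding variants: full-size buffers (-o 0) plus the four-core mask (-m 0xf). The user time it reports further down (0m4.580s against 0m1.332s real) is consistent with all four reactors staying busy for the whole run. Equivalent manual invocation, same assumptions:

# Full-buffer decompress on four cores; flags copied from the traced run_test line.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf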
00:06:55.435 [2024-07-13 00:32:06.757088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215187 ] 00:06:55.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.435 [2024-07-13 00:32:06.824178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.435 [2024-07-13 00:32:06.866731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.435 [2024-07-13 00:32:06.866840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.435 [2024-07-13 00:32:06.866925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.435 [2024-07-13 00:32:06.866926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.435 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.436 00:32:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.812 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.813 00:06:56.813 real 0m1.332s 00:06:56.813 user 0m4.580s 00:06:56.813 sys 0m0.130s 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.813 00:32:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:56.813 ************************************ 00:06:56.813 END TEST accel_decomp_full_mcore 00:06:56.813 ************************************ 00:06:56.813 00:32:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.813 00:32:08 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:56.813 00:32:08 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:56.813 00:32:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.813 00:32:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.813 ************************************ 00:06:56.813 START TEST accel_decomp_mthread 00:06:56.813 ************************************ 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:56.813 [2024-07-13 00:32:08.156314] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
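Note: accel_decomp_mthread repeats the decompress workload on a single core with two worker threads per core. The run_test line above shows the exact invocation; reduced to its essentials (paths abbreviated relative to the spdk checkout, an assumption of this sketch), it is:

    # one-second run (-t 1), software decompress of the bib file, verify the result (-y),
    # two threads per core (-T 2); the JSON accel config arrives on anonymous fd 62
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -T 2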
00:06:56.813 [2024-07-13 00:32:08.156365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215437 ] 00:06:56.813 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.813 [2024-07-13 00:32:08.225843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.813 [2024-07-13 00:32:08.268272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.813 00:32:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.189 00:06:58.189 real 0m1.318s 00:06:58.189 user 0m1.202s 00:06:58.189 sys 0m0.129s 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.189 00:32:09 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:58.189 ************************************ 00:06:58.189 END TEST accel_decomp_mthread 00:06:58.189 ************************************ 00:06:58.189 00:32:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.189 00:32:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.189 00:32:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:58.189 00:32:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.189 00:32:09 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.189 ************************************ 00:06:58.189 START TEST accel_decomp_full_mthread 00:06:58.189 ************************************ 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.189 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:58.190 [2024-07-13 00:32:09.537132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
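Note: the full_mthread variant differs from accel_decomp_mthread only by -o 0. Judging from the parsed vals ('111250 bytes' here versus '4096 bytes' in the previous test), -o 0 appears to make accel_perf size the operation from the input file itself, so the whole bib file is decompressed as a single buffer rather than in 4096-byte blocks:

    # same workload as above, but -o 0: buffer size taken from the input
    # (111250 bytes in this run, per the parsed vals)
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2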
00:06:58.190 [2024-07-13 00:32:09.537181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215688 ] 00:06:58.190 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.190 [2024-07-13 00:32:09.604062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.190 [2024-07-13 00:32:09.643372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.190 00:32:09 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.190 00:32:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.568 00:06:59.568 real 0m1.333s 00:06:59.568 user 0m1.226s 00:06:59.568 sys 0m0.120s 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.568 00:32:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:59.568 ************************************ 00:06:59.568 END 
TEST accel_decomp_full_mthread 00:06:59.568 ************************************ 00:06:59.568 00:32:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.569 00:32:10 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:59.569 00:32:10 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.569 00:32:10 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:59.569 00:32:10 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:59.569 00:32:10 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.569 00:32:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.569 00:32:10 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.569 00:32:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.569 00:32:10 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.569 00:32:10 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.569 00:32:10 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.569 00:32:10 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:59.569 00:32:10 accel -- accel/accel.sh@41 -- # jq -r . 00:06:59.569 ************************************ 00:06:59.569 START TEST accel_dif_functional_tests 00:06:59.569 ************************************ 00:06:59.569 00:32:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.569 [2024-07-13 00:32:10.958373] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:59.569 [2024-07-13 00:32:10.958409] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215939 ] 00:06:59.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.569 [2024-07-13 00:32:11.024539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.569 [2024-07-13 00:32:11.065682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.569 [2024-07-13 00:32:11.065791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.569 [2024-07-13 00:32:11.065792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.569 00:06:59.569 00:06:59.569 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.569 http://cunit.sourceforge.net/ 00:06:59.569 00:06:59.569 00:06:59.569 Suite: accel_dif 00:06:59.569 Test: verify: DIF generated, GUARD check ...passed 00:06:59.569 Test: verify: DIF generated, APPTAG check ...passed 00:06:59.569 Test: verify: DIF generated, REFTAG check ...passed 00:06:59.569 Test: verify: DIF not generated, GUARD check ...[2024-07-13 00:32:11.127693] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.569 passed 00:06:59.569 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 00:32:11.127740] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.569 passed 00:06:59.569 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 00:32:11.127759] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.569 passed 00:06:59.569 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:59.569 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 
00:32:11.127802] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:59.569 passed 00:06:59.569 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:59.569 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:59.569 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:59.569 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 00:32:11.127896] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:59.569 passed 00:06:59.569 Test: verify copy: DIF generated, GUARD check ...passed 00:06:59.569 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:59.569 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:59.569 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 00:32:11.128001] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.569 passed 00:06:59.569 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 00:32:11.128021] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.569 passed 00:06:59.569 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 00:32:11.128042] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.569 passed 00:06:59.828 Test: generate copy: DIF generated, GUARD check ...passed 00:06:59.828 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:59.828 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:59.828 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:59.828 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:59.828 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:59.828 Test: generate copy: iovecs-len validate ...[2024-07-13 00:32:11.128208] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
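Note: the dif.c *ERROR* lines above are the point of these tests, not failures. Each negative case feeds deliberately bad integrity metadata (guard, app tag, or ref tag) or, for iovecs-len validate, a bounce iovec array misaligned with the block size, and the test passes precisely when the library rejects it — hence the 'passed' verdict that follows each error. The whole suite is the standalone CUnit binary invoked by run_test above:

    # minimal sketch; like accel_perf, dif reads its accel JSON config from an anonymous fd
    ./test/accel/dif/dif -c /dev/fd/62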
00:06:59.828 passed 00:06:59.828 Test: generate copy: buffer alignment validate ...passed 00:06:59.828 00:06:59.828 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.828 suites 1 1 n/a 0 0 00:06:59.828 tests 26 26 26 0 0 00:06:59.828 asserts 115 115 115 0 n/a 00:06:59.828 00:06:59.828 Elapsed time = 0.002 seconds 00:06:59.828 00:06:59.828 real 0m0.371s 00:06:59.828 user 0m0.553s 00:06:59.828 sys 0m0.152s 00:06:59.828 00:32:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.828 00:32:11 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:59.828 ************************************ 00:06:59.828 END TEST accel_dif_functional_tests 00:06:59.828 ************************************ 00:06:59.828 00:32:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.828 00:06:59.828 real 0m30.270s 00:06:59.828 user 0m33.896s 00:06:59.828 sys 0m4.418s 00:06:59.828 00:32:11 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.828 00:32:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.828 ************************************ 00:06:59.828 END TEST accel 00:06:59.828 ************************************ 00:06:59.828 00:32:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:59.828 00:32:11 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:59.828 00:32:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.828 00:32:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.828 00:32:11 -- common/autotest_common.sh@10 -- # set +x 00:07:00.089 ************************************ 00:07:00.089 START TEST accel_rpc 00:07:00.089 ************************************ 00:07:00.089 00:32:11 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:00.089 * Looking for test storage... 00:07:00.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:00.089 00:32:11 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:00.089 00:32:11 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1216013 00:07:00.089 00:32:11 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1216013 00:07:00.089 00:32:11 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:00.089 00:32:11 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1216013 ']' 00:07:00.089 00:32:11 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.089 00:32:11 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.089 00:32:11 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.089 00:32:11 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.089 00:32:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.089 [2024-07-13 00:32:11.532129] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
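Note: for the RPC tests the target is launched with --wait-for-rpc, which brings up only the RPC server and defers subsystem initialization until framework_start_init is called; that window is what lets the test change opcode assignments first. waitforlisten then polls the RPC socket until it answers. A minimal sketch of that handshake (assuming the default /var/tmp/spdk.sock socket and rpc.py's -t timeout option):

    ./build/bin/spdk_tgt --wait-for-rpc &
    pid=$!
    # poll until the RPC server responds; only then is it safe to issue pre-init RPCs
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done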
00:07:00.089 [2024-07-13 00:32:11.532176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216013 ] 00:07:00.089 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.089 [2024-07-13 00:32:11.599253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.089 [2024-07-13 00:32:11.640132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.387 00:32:11 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.387 00:32:11 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:00.387 00:32:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:00.387 00:32:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:00.387 00:32:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:00.387 00:32:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:00.387 00:32:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:00.387 00:32:11 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.387 00:32:11 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.387 00:32:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 ************************************ 00:07:00.387 START TEST accel_assign_opcode 00:07:00.387 ************************************ 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 [2024-07-13 00:32:11.696578] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 [2024-07-13 00:32:11.704591] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
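Note: the assignment sequence just traced runs entirely before framework_start_init: copy is first assigned to a nonexistent module ('incorrect'), then reassigned to software, and both RPCs are accepted with a NOTICE — the log suggests assignments are only resolved once the framework initializes. The grep that follows confirms software won. As standalone RPC calls, the sequence is:

    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init despite the bogus module
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # last assignment wins
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software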
00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.387 software 00:07:00.387 00:07:00.387 real 0m0.225s 00:07:00.387 user 0m0.044s 00:07:00.387 sys 0m0.009s 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.387 00:32:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:00.387 ************************************ 00:07:00.387 END TEST accel_assign_opcode 00:07:00.388 ************************************ 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:00.656 00:32:11 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1216013 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1216013 ']' 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1216013 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1216013 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1216013' 00:07:00.656 killing process with pid 1216013 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@967 -- # kill 1216013 00:07:00.656 00:32:11 accel_rpc -- common/autotest_common.sh@972 -- # wait 1216013 00:07:00.915 00:07:00.915 real 0m0.901s 00:07:00.915 user 0m0.826s 00:07:00.915 sys 0m0.403s 00:07:00.915 00:32:12 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.915 00:32:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.915 ************************************ 00:07:00.915 END TEST accel_rpc 00:07:00.915 ************************************ 00:07:00.915 00:32:12 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.916 00:32:12 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.916 00:32:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.916 00:32:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.916 00:32:12 -- common/autotest_common.sh@10 -- # set +x 00:07:00.916 ************************************ 00:07:00.916 START TEST app_cmdline 00:07:00.916 ************************************ 00:07:00.916 00:32:12 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.916 * Looking for test storage... 
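Note: app_cmdline exercises the RPC allowlist. The target below is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods work and anything else is rejected with JSON-RPC error -32601. The behaviour the test verifies, as direct calls:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version          # allowed: returns the version object seen below
    ./scripts/rpc.py env_dpdk_get_mem_stats    # not on the list: 'Method not found' (-32601)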
00:07:00.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:00.916 00:32:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:00.916 00:32:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1216313 00:07:00.916 00:32:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1216313 00:07:00.916 00:32:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:00.916 00:32:12 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1216313 ']' 00:07:00.916 00:32:12 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.916 00:32:12 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.916 00:32:12 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.916 00:32:12 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.916 00:32:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.175 [2024-07-13 00:32:12.502155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:01.175 [2024-07-13 00:32:12.502201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216313 ] 00:07:01.175 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.175 [2024-07-13 00:32:12.556070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.175 [2024-07-13 00:32:12.595667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.433 00:32:12 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.433 00:32:12 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:01.433 00:32:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:01.433 { 00:07:01.433 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:01.433 "fields": { 00:07:01.433 "major": 24, 00:07:01.433 "minor": 9, 00:07:01.433 "patch": 0, 00:07:01.433 "suffix": "-pre", 00:07:01.433 "commit": "719d03c6a" 00:07:01.433 } 00:07:01.433 } 00:07:01.433 00:32:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.433 00:32:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.433 00:32:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:01.433 00:32:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.434 00:32:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.434 00:32:12 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.434 00:32:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.434 00:32:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.434 00:32:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:01.434 00:32:12 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.692 00:32:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.692 00:32:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.692 00:32:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.692 request: 00:07:01.692 { 00:07:01.692 "method": "env_dpdk_get_mem_stats", 00:07:01.692 "req_id": 1 00:07:01.692 } 00:07:01.692 Got JSON-RPC error response 00:07:01.692 response: 00:07:01.692 { 00:07:01.692 "code": -32601, 00:07:01.692 "message": "Method not found" 00:07:01.692 } 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.692 00:32:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1216313 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1216313 ']' 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1216313 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1216313 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1216313' 00:07:01.692 killing process with pid 1216313 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@967 -- # kill 1216313 00:07:01.692 00:32:13 app_cmdline -- common/autotest_common.sh@972 -- # wait 1216313 00:07:02.259 00:07:02.259 real 0m1.190s 00:07:02.259 user 0m1.387s 00:07:02.259 sys 0m0.418s 00:07:02.259 00:32:13 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
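Note: killprocess, traced above, guards against tearing down the wrong process: kill -0 checks that the pid is still alive, ps --no-headers -o comm= confirms the command name (reactor_0 in this run, and never sudo), and only then does it kill and wait. Simplified:

    kill -0 "$pid"                              # still running?
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 here
    [ "$name" != sudo ] && kill "$pid" && wait "$pid"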
00:07:02.259 00:32:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.259 ************************************ 00:07:02.259 END TEST app_cmdline 00:07:02.259 ************************************ 00:07:02.259 00:32:13 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.259 00:32:13 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.259 00:32:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.259 00:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.259 00:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:02.259 ************************************ 00:07:02.259 START TEST version 00:07:02.259 ************************************ 00:07:02.259 00:32:13 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.259 * Looking for test storage... 00:07:02.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:02.259 00:32:13 version -- app/version.sh@17 -- # get_header_version major 00:07:02.259 00:32:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # cut -f2 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.259 00:32:13 version -- app/version.sh@17 -- # major=24 00:07:02.259 00:32:13 version -- app/version.sh@18 -- # get_header_version minor 00:07:02.259 00:32:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # cut -f2 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.259 00:32:13 version -- app/version.sh@18 -- # minor=9 00:07:02.259 00:32:13 version -- app/version.sh@19 -- # get_header_version patch 00:07:02.259 00:32:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # cut -f2 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.259 00:32:13 version -- app/version.sh@19 -- # patch=0 00:07:02.259 00:32:13 version -- app/version.sh@20 -- # get_header_version suffix 00:07:02.259 00:32:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # cut -f2 00:07:02.259 00:32:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:02.259 00:32:13 version -- app/version.sh@20 -- # suffix=-pre 00:07:02.259 00:32:13 version -- app/version.sh@22 -- # version=24.9 00:07:02.259 00:32:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.259 00:32:13 version -- app/version.sh@28 -- # version=24.9rc0 00:07:02.259 00:32:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:02.259 00:32:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:02.259 00:32:13 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:02.259 00:32:13 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:02.259 00:07:02.259 real 0m0.154s 00:07:02.259 user 0m0.087s 00:07:02.259 sys 0m0.103s 00:07:02.259 00:32:13 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.259 00:32:13 version -- common/autotest_common.sh@10 -- # set +x 00:07:02.259 ************************************ 00:07:02.259 END TEST version 00:07:02.259 ************************************ 00:07:02.259 00:32:13 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.259 00:32:13 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:02.259 00:32:13 -- spdk/autotest.sh@198 -- # uname -s 00:07:02.259 00:32:13 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:02.259 00:32:13 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:02.259 00:32:13 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:02.259 00:32:13 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:02.259 00:32:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:02.259 00:32:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:02.260 00:32:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.260 00:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 00:32:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:02.519 00:32:13 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:02.519 00:32:13 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:02.519 00:32:13 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:02.519 00:32:13 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:02.519 00:32:13 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:02.519 00:32:13 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.519 00:32:13 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.519 00:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.519 00:32:13 -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 ************************************ 00:07:02.519 START TEST nvmf_tcp 00:07:02.519 ************************************ 00:07:02.519 00:32:13 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.519 * Looking for test storage... 00:07:02.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.519 00:32:13 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.519 00:32:13 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.519 00:32:13 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.519 00:32:13 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.519 00:32:13 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.519 00:32:13 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.519 00:32:13 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:02.519 00:32:13 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.519 00:32:13 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:02.519 00:32:14 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.519 00:32:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:02.519 00:32:14 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:02.519 00:32:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:02.519 00:32:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.519 00:32:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 ************************************ 00:07:02.520 START TEST nvmf_example 00:07:02.520 ************************************ 00:07:02.520 00:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:02.778 * Looking for test storage... 
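The nvmf/common.sh preamble traced above fixes the standard NVMe-oF TCP ports and derives the initiator's host identity from a freshly generated NQN. A minimal sketch of that setup; the suffix extraction on the fifth line is an assumption, since the trace only shows that NVME_HOSTID ends up equal to the UUID portion of the NQN:

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the trailing UUID (assumed)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")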
00:07:02.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:02.778 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.779 00:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:09.345 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:09.345 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:09.345 Found net devices under 
0000:86:00.0: cvl_0_0 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:09.345 Found net devices under 0000:86:00.1: cvl_0_1 00:07:09.345 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:09.346 00:07:09.346 --- 10.0.0.2 ping statistics --- 00:07:09.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.346 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:09.346 00:07:09.346 --- 10.0.0.1 ping statistics --- 00:07:09.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.346 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:09.346 00:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1219704 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1219704 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1219704 ']' 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
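The block above assembles the loopback test bed for the two-port E810 NIC discovered earlier (net devices cvl_0_0 and cvl_0_1): the target port is moved into a private network namespace, each side gets a 10.0.0.0/24 address, TCP port 4420 is opened on the initiator interface, and a ping in each direction proves connectivity before the example target is launched inside the namespace. A condensed replay of those commands, assuming a clean host with no leftover namespace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port joins the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator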
00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.346 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:09.605 00:32:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:09.605 EAL: No free 2048 kB hugepages reported on node 1 
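Once the target is listening on /var/tmp/spdk.sock, the rpc_cmd calls above build the export path: a TCP transport, one 64 MiB malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. The same configuration can be issued by hand with scripts/rpc.py (rpc_cmd in these tests wraps it; the default RPC socket is assumed, and the transport flags are copied verbatim from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512               # 64 MiB bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf invocation that follows then drives that subsystem with a 64-deep queue of 4 KiB random I/O at a 30% read mix (-M 30) for 10 seconds.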
00:07:19.585 Initializing NVMe Controllers 00:07:19.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:19.585 Initialization complete. Launching workers. 00:07:19.585 ======================================================== 00:07:19.585 Latency(us) 00:07:19.585 Device Information : IOPS MiB/s Average min max 00:07:19.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18344.89 71.66 3488.50 521.68 15450.44 00:07:19.585 ======================================================== 00:07:19.585 Total : 18344.89 71.66 3488.50 521.68 15450.44 00:07:19.585 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.585 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.585 rmmod nvme_tcp 00:07:19.845 rmmod nvme_fabrics 00:07:19.845 rmmod nvme_keyring 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1219704 ']' 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1219704 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1219704 ']' 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1219704 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1219704 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1219704' 00:07:19.845 killing process with pid 1219704 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1219704 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1219704 00:07:19.845 nvmf threads initialize successfully 00:07:19.845 bdev subsystem init successfully 00:07:19.845 created a nvmf target service 00:07:19.845 create targets's poll groups done 00:07:19.845 all subsystems of target started 00:07:19.845 nvmf target is running 00:07:19.845 all subsystems of target stopped 00:07:19.845 destroy targets's poll groups done 00:07:19.845 destroyed the nvmf target service 00:07:19.845 bdev subsystem finish successfully 00:07:19.845 nvmf threads destroy successfully 00:07:19.845 00:32:31 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.845 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.104 00:32:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.009 00:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.009 00:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:22.009 00:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.009 00:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.009 00:07:22.009 real 0m19.465s 00:07:22.009 user 0m45.630s 00:07:22.009 sys 0m5.841s 00:07:22.009 00:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.009 00:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.009 ************************************ 00:07:22.009 END TEST nvmf_example 00:07:22.009 ************************************ 00:07:22.009 00:32:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:22.009 00:32:33 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.009 00:32:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.009 00:32:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.009 00:32:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.271 ************************************ 00:07:22.271 START TEST nvmf_filesystem 00:07:22.271 ************************************ 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.271 * Looking for test storage... 
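The tail of the example test unwinds the setup: nvmftestfini unloads the nvme-tcp module stack (the rmmod lines are modprobe's verbose output), kills the target process, removes the namespace, and flushes the leftover initiator address. A condensed mirror of that teardown; the netns removal line is an assumption about what _remove_spdk_ns does, since the trace redirects its output away:

  modprobe -v -r nvme-tcp            # drags out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 1219704                       # the nvmf example target started earlier
  ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1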
00:07:22.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:22.271 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:22.272 00:32:33 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:22.272 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:22.272 #define SPDK_CONFIG_H 00:07:22.272 #define SPDK_CONFIG_APPS 1 00:07:22.273 #define SPDK_CONFIG_ARCH native 00:07:22.273 #undef SPDK_CONFIG_ASAN 00:07:22.273 #undef SPDK_CONFIG_AVAHI 00:07:22.273 #undef SPDK_CONFIG_CET 00:07:22.273 #define SPDK_CONFIG_COVERAGE 1 00:07:22.273 #define SPDK_CONFIG_CROSS_PREFIX 00:07:22.273 #undef SPDK_CONFIG_CRYPTO 00:07:22.273 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:22.273 #undef SPDK_CONFIG_CUSTOMOCF 00:07:22.273 #undef SPDK_CONFIG_DAOS 00:07:22.273 #define SPDK_CONFIG_DAOS_DIR 00:07:22.273 #define SPDK_CONFIG_DEBUG 1 00:07:22.273 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:22.273 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:22.273 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:22.273 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:22.273 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:22.273 #undef SPDK_CONFIG_DPDK_UADK 00:07:22.273 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:22.273 #define SPDK_CONFIG_EXAMPLES 1 00:07:22.273 #undef SPDK_CONFIG_FC 00:07:22.273 #define SPDK_CONFIG_FC_PATH 00:07:22.273 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:22.273 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:22.273 #undef SPDK_CONFIG_FUSE 00:07:22.273 #undef SPDK_CONFIG_FUZZER 00:07:22.273 #define SPDK_CONFIG_FUZZER_LIB 00:07:22.273 #undef SPDK_CONFIG_GOLANG 00:07:22.273 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:22.273 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:22.273 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:22.273 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:22.273 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:22.273 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:22.273 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:22.273 #define SPDK_CONFIG_IDXD 1 00:07:22.273 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:22.273 #undef SPDK_CONFIG_IPSEC_MB 00:07:22.273 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:22.273 #define SPDK_CONFIG_ISAL 1 00:07:22.273 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:22.273 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:22.273 #define 
SPDK_CONFIG_LIBDIR 00:07:22.273 #undef SPDK_CONFIG_LTO 00:07:22.273 #define SPDK_CONFIG_MAX_LCORES 128 00:07:22.273 #define SPDK_CONFIG_NVME_CUSE 1 00:07:22.273 #undef SPDK_CONFIG_OCF 00:07:22.273 #define SPDK_CONFIG_OCF_PATH 00:07:22.273 #define SPDK_CONFIG_OPENSSL_PATH 00:07:22.273 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:22.273 #define SPDK_CONFIG_PGO_DIR 00:07:22.273 #undef SPDK_CONFIG_PGO_USE 00:07:22.273 #define SPDK_CONFIG_PREFIX /usr/local 00:07:22.273 #undef SPDK_CONFIG_RAID5F 00:07:22.273 #undef SPDK_CONFIG_RBD 00:07:22.273 #define SPDK_CONFIG_RDMA 1 00:07:22.273 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:22.273 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:22.273 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:22.273 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:22.273 #define SPDK_CONFIG_SHARED 1 00:07:22.273 #undef SPDK_CONFIG_SMA 00:07:22.273 #define SPDK_CONFIG_TESTS 1 00:07:22.273 #undef SPDK_CONFIG_TSAN 00:07:22.273 #define SPDK_CONFIG_UBLK 1 00:07:22.273 #define SPDK_CONFIG_UBSAN 1 00:07:22.273 #undef SPDK_CONFIG_UNIT_TESTS 00:07:22.273 #undef SPDK_CONFIG_URING 00:07:22.273 #define SPDK_CONFIG_URING_PATH 00:07:22.273 #undef SPDK_CONFIG_URING_ZNS 00:07:22.273 #undef SPDK_CONFIG_USDT 00:07:22.273 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:22.273 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:22.273 #define SPDK_CONFIG_VFIO_USER 1 00:07:22.273 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:22.273 #define SPDK_CONFIG_VHOST 1 00:07:22.273 #define SPDK_CONFIG_VIRTIO 1 00:07:22.273 #undef SPDK_CONFIG_VTUNE 00:07:22.273 #define SPDK_CONFIG_VTUNE_DIR 00:07:22.273 #define SPDK_CONFIG_WERROR 1 00:07:22.273 #define SPDK_CONFIG_WPDK_DIR 00:07:22.273 #undef SPDK_CONFIG_XNVME 00:07:22.273 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
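Between the two tests, autotest_common.sh re-probes the environment. The applications.sh section traced above dumps include/spdk/config.h, where every CONFIG_* value from build_config.sh reappears as a #define or #undef, and pattern-matches it for SPDK_CONFIG_DEBUG before debug-only apps may be enabled. A sketch of the shape of that guard, in the order the trace evaluates it ($rootdir and DEBUG_APPS_ENABLED are hypothetical names; the real consumer is not visible in the trace):

  [[ -e "$rootdir/include/spdk/config.h" ]] \
      && [[ $(< "$rootdir/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]] \
      && (( SPDK_AUTOTEST_DEBUG_APPS )) \
      && DEBUG_APPS_ENABLED=1            # hypothetical; stands in for the untraced consumer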
00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:22.273 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:22.274 
00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:22.274 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:22.275 
00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.275 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1222112 ]] 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1222112 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.xdK9hZ 00:07:22.276 
00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xdK9hZ/tests/target /tmp/spdk.xdK9hZ 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=188780888064 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7193411584 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=3375104 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986789376 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=360448 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:22.276 * Looking for test storage... 
00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:22.276 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=188780888064 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=9408004096 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.277 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.537 00:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:29.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:29.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:29.110 Found net devices under 0000:86:00.0: cvl_0_0 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:29.110 Found net devices under 0000:86:00.1: cvl_0_1 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:29.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:29.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:07:29.110
00:07:29.110 --- 10.0.0.2 ping statistics ---
00:07:29.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.110 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:29.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:29.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms
00:07:29.110
00:07:29.110 --- 10.0.0.1 ping statistics ---
00:07:29.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.110 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:29.110 00:32:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:07:29.110 ************************************
00:07:29.111 START TEST nvmf_filesystem_no_in_capsule
00:07:29.111 ************************************
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1225166
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1225166
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z
1225166 ']' 00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.111 [2024-07-13 00:32:39.784404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:29.111 [2024-07-13 00:32:39.784450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.111 [2024-07-13 00:32:39.856071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.111 [2024-07-13 00:32:39.901112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.111 [2024-07-13 00:32:39.901153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.111 [2024-07-13 00:32:39.901161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.111 [2024-07-13 00:32:39.901167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.111 [2024-07-13 00:32:39.901172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:29.111 [2024-07-13 00:32:39.901217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.111 [2024-07-13 00:32:39.901330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.111 [2024-07-13 00:32:39.901364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.111 [2024-07-13 00:32:39.901365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.111 00:32:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.111 [2024-07-13 00:32:40.044349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.111 Malloc1 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.111 [2024-07-13 00:32:40.199905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:29.111 { 00:07:29.111 "name": "Malloc1", 00:07:29.111 "aliases": [ 00:07:29.111 "f0643e6a-b7b8-4e24-bc6a-68f5b3c19af0" 00:07:29.111 ], 00:07:29.111 "product_name": "Malloc disk", 00:07:29.111 "block_size": 512, 00:07:29.111 "num_blocks": 1048576, 00:07:29.111 "uuid": "f0643e6a-b7b8-4e24-bc6a-68f5b3c19af0", 00:07:29.111 "assigned_rate_limits": { 00:07:29.111 "rw_ios_per_sec": 0, 00:07:29.111 "rw_mbytes_per_sec": 0, 00:07:29.111 "r_mbytes_per_sec": 0, 00:07:29.111 "w_mbytes_per_sec": 0 00:07:29.111 }, 00:07:29.111 "claimed": true, 00:07:29.111 "claim_type": "exclusive_write", 00:07:29.111 "zoned": false, 00:07:29.111 "supported_io_types": { 00:07:29.111 "read": true, 00:07:29.111 "write": true, 00:07:29.111 "unmap": true, 00:07:29.111 "flush": true, 00:07:29.111 "reset": true, 00:07:29.111 "nvme_admin": false, 00:07:29.111 "nvme_io": false, 00:07:29.111 "nvme_io_md": false, 00:07:29.111 "write_zeroes": true, 00:07:29.111 "zcopy": true, 00:07:29.111 "get_zone_info": false, 00:07:29.111 "zone_management": false, 00:07:29.111 "zone_append": false, 00:07:29.111 "compare": false, 00:07:29.111 "compare_and_write": false, 00:07:29.111 "abort": true, 00:07:29.111 "seek_hole": false, 00:07:29.111 "seek_data": false, 00:07:29.111 "copy": true, 00:07:29.111 "nvme_iov_md": false 00:07:29.111 }, 00:07:29.111 "memory_domains": [ 00:07:29.111 { 
00:07:29.111 "dma_device_id": "system", 00:07:29.111 "dma_device_type": 1 00:07:29.111 }, 00:07:29.111 { 00:07:29.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.111 "dma_device_type": 2 00:07:29.111 } 00:07:29.111 ], 00:07:29.111 "driver_specific": {} 00:07:29.111 } 00:07:29.111 ]' 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:29.111 00:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.049 00:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:30.049 00:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:30.049 00:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.049 00:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:30.049 00:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:31.948 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:32.206 00:32:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:32.773 00:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.148 ************************************ 00:07:34.148 START TEST filesystem_ext4 00:07:34.148 ************************************ 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:34.148 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:34.149 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:34.149 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:34.149 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:34.149 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:34.149 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:34.149 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:34.149 00:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:34.149 00:32:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:34.149 mke2fs 1.46.5 (30-Dec-2021) 00:07:34.149 Discarding device blocks: 0/522240 done 00:07:34.149 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:34.149 Filesystem UUID: b33a030a-1826-4631-b8eb-cb950ec3ea1a 00:07:34.149 Superblock backups stored on blocks: 00:07:34.149 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:34.149 00:07:34.149 Allocating group tables: 0/64 done 00:07:34.149 Writing inode tables: 0/64 done 00:07:34.149 Creating journal (8192 blocks): done 00:07:34.976 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:34.976 00:07:34.976 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:34.976 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.631 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.631 00:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1225166 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.631 00:07:35.631 real 0m1.747s 00:07:35.631 user 0m0.032s 00:07:35.631 sys 0m0.056s 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:35.631 ************************************ 00:07:35.631 END TEST filesystem_ext4 00:07:35.631 ************************************ 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.631 00:32:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.631 ************************************ 00:07:35.631 START TEST filesystem_btrfs 00:07:35.631 ************************************ 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:35.631 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:35.890 btrfs-progs v6.6.2 00:07:35.890 See https://btrfs.readthedocs.io for more information. 00:07:35.890 00:07:35.890 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:35.890 NOTE: several default settings have changed in version 5.15, please make sure 00:07:35.890 this does not affect your deployments: 00:07:35.890 - DUP for metadata (-m dup) 00:07:35.890 - enabled no-holes (-O no-holes) 00:07:35.890 - enabled free-space-tree (-R free-space-tree) 00:07:35.890 00:07:35.890 Label: (null) 00:07:35.890 UUID: 6ed38525-d691-458e-a387-60319faa044c 00:07:35.890 Node size: 16384 00:07:35.890 Sector size: 4096 00:07:35.890 Filesystem size: 510.00MiB 00:07:35.890 Block group profiles: 00:07:35.890 Data: single 8.00MiB 00:07:35.890 Metadata: DUP 32.00MiB 00:07:35.890 System: DUP 8.00MiB 00:07:35.890 SSD detected: yes 00:07:35.890 Zoned device: no 00:07:35.890 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:35.890 Runtime features: free-space-tree 00:07:35.890 Checksum: crc32c 00:07:35.890 Number of devices: 1 00:07:35.890 Devices: 00:07:35.890 ID SIZE PATH 00:07:35.890 1 510.00MiB /dev/nvme0n1p1 00:07:35.890 00:07:35.890 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:35.890 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1225166 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:36.459 00:07:36.459 real 0m0.718s 00:07:36.459 user 0m0.030s 00:07:36.459 sys 0m0.123s 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:36.459 ************************************ 00:07:36.459 END TEST filesystem_btrfs 00:07:36.459 ************************************ 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.459 ************************************ 00:07:36.459 START TEST filesystem_xfs 00:07:36.459 ************************************ 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:36.459 00:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:36.459 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:36.459 = sectsz=512 attr=2, projid32bit=1 00:07:36.459 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:36.459 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:36.459 data = bsize=4096 blocks=130560, imaxpct=25 00:07:36.459 = sunit=0 swidth=0 blks 00:07:36.459 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:36.459 log =internal log bsize=4096 blocks=16384, version=2 00:07:36.459 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:36.459 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:37.397 Discarding blocks...Done. 
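Every filesystem_* subtest in this run drives the same sequence from target/filesystem.sh: build the filesystem on the exported partition, mount it, create and delete a file with syncs in between, unmount, then verify the target process and the block devices survived. Condensed into a sketch (device path, mountpoint and PID variable taken from the log; the retry logic and error handling of the real script are omitted):

# One pass of the per-filesystem check; fstype is ext4, btrfs or xfs.
fstype=$1
dev=/dev/nvme0n1p1
[ "$fstype" = ext4 ] && force=-F || force=-f    # only mkfs.ext4 takes -F
mkfs."$fstype" "$force" "$dev"
mount "$dev" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                              # target still running?
lsblk -l -o NAME | grep -q -w nvme0n1           # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1         # partition still visible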
00:07:37.397 00:32:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:37.397 00:32:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1225166 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.932 00:07:39.932 real 0m3.281s 00:07:39.932 user 0m0.029s 00:07:39.932 sys 0m0.066s 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:39.932 ************************************ 00:07:39.932 END TEST filesystem_xfs 00:07:39.932 ************************************ 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:39.932 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:40.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.191 00:32:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1225166 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1225166 ']' 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1225166 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225166 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225166' 00:07:40.191 killing process with pid 1225166 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1225166 00:07:40.191 00:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1225166 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:40.760 00:07:40.760 real 0m12.298s 00:07:40.760 user 0m48.257s 00:07:40.760 sys 0m1.180s 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.760 ************************************ 00:07:40.760 END TEST nvmf_filesystem_no_in_capsule 00:07:40.760 ************************************ 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.760 ************************************ 00:07:40.760 START TEST nvmf_filesystem_in_capsule 00:07:40.760 ************************************ 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1227445 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1227445 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1227445 ']' 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.760 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.760 [2024-07-13 00:32:52.154380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:40.760 [2024-07-13 00:32:52.154421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.761 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.761 [2024-07-13 00:32:52.209327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.761 [2024-07-13 00:32:52.251542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.761 [2024-07-13 00:32:52.251582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:40.761 [2024-07-13 00:32:52.251589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.761 [2024-07-13 00:32:52.251595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.761 [2024-07-13 00:32:52.251600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.761 [2024-07-13 00:32:52.254244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.761 [2024-07-13 00:32:52.254277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.761 [2024-07-13 00:32:52.254382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.761 [2024-07-13 00:32:52.254383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.021 [2024-07-13 00:32:52.403410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.021 Malloc1 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.021 00:32:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.021 [2024-07-13 00:32:52.547817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:41.021 { 00:07:41.021 "name": "Malloc1", 00:07:41.021 "aliases": [ 00:07:41.021 "d3a34bd1-54e1-4330-9ab3-6730f988ac6a" 00:07:41.021 ], 00:07:41.021 "product_name": "Malloc disk", 00:07:41.021 "block_size": 512, 00:07:41.021 "num_blocks": 1048576, 00:07:41.021 "uuid": "d3a34bd1-54e1-4330-9ab3-6730f988ac6a", 00:07:41.021 "assigned_rate_limits": { 00:07:41.021 "rw_ios_per_sec": 0, 00:07:41.021 "rw_mbytes_per_sec": 0, 00:07:41.021 "r_mbytes_per_sec": 0, 00:07:41.021 "w_mbytes_per_sec": 0 00:07:41.021 }, 00:07:41.021 "claimed": true, 00:07:41.021 "claim_type": "exclusive_write", 00:07:41.021 "zoned": false, 00:07:41.021 "supported_io_types": { 00:07:41.021 "read": true, 00:07:41.021 "write": true, 00:07:41.021 "unmap": true, 00:07:41.021 "flush": true, 00:07:41.021 "reset": true, 00:07:41.021 "nvme_admin": false, 00:07:41.021 "nvme_io": false, 00:07:41.021 "nvme_io_md": false, 00:07:41.021 "write_zeroes": true, 00:07:41.021 "zcopy": true, 00:07:41.021 "get_zone_info": false, 00:07:41.021 "zone_management": false, 00:07:41.021 
"zone_append": false, 00:07:41.021 "compare": false, 00:07:41.021 "compare_and_write": false, 00:07:41.021 "abort": true, 00:07:41.021 "seek_hole": false, 00:07:41.021 "seek_data": false, 00:07:41.021 "copy": true, 00:07:41.021 "nvme_iov_md": false 00:07:41.021 }, 00:07:41.021 "memory_domains": [ 00:07:41.021 { 00:07:41.021 "dma_device_id": "system", 00:07:41.021 "dma_device_type": 1 00:07:41.021 }, 00:07:41.021 { 00:07:41.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.021 "dma_device_type": 2 00:07:41.021 } 00:07:41.021 ], 00:07:41.021 "driver_specific": {} 00:07:41.021 } 00:07:41.021 ]' 00:07:41.021 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:41.281 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:41.281 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:41.281 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:41.281 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:41.281 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:41.281 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:41.281 00:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:42.233 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:42.233 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:42.233 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:42.233 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:42.233 00:32:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:44.768 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:44.768 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:44.768 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.768 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:44.768 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.768 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:44.768 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:44.769 00:32:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:44.769 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:44.769 00:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:45.708 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:45.708 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:45.708 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:45.708 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.708 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.967 ************************************ 00:07:45.967 START TEST filesystem_in_capsule_ext4 00:07:45.967 ************************************ 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:45.967 00:32:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:45.967 00:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:45.967 mke2fs 1.46.5 (30-Dec-2021) 00:07:45.967 Discarding device blocks: 0/522240 done 00:07:45.967 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:45.967 Filesystem UUID: a75dea35-d079-445b-9aa7-724b33bc674a 00:07:45.967 Superblock backups stored on blocks: 00:07:45.967 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:45.967 00:07:45.967 Allocating group tables: 0/64 done 00:07:45.967 Writing inode tables: 0/64 done 00:07:46.225 Creating journal (8192 blocks): done 00:07:47.078 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:07:47.078 00:07:47.078 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:47.078 00:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1227445 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.015 00:07:48.015 real 0m2.166s 00:07:48.015 user 0m0.025s 00:07:48.015 sys 0m0.063s 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:48.015 ************************************ 00:07:48.015 END TEST filesystem_in_capsule_ext4 00:07:48.015 ************************************ 
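This in-capsule half of the run repeats the same three filesystem checks; the only parameter that changed against nvmf_filesystem_no_in_capsule is the -c value passed to nvmf_create_transport (0 before, 4096 here). The provisioning RPCs the test issues, reconstructed from the trace above (rpc.py path assumed; NQNs, serial and addresses as logged):

# Target-side provisioning (target/filesystem.sh@52-56); in_capsule=0 or 4096.
in_capsule=4096
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side then attaches (--hostnqn/--hostid flags from the log elided):
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420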
00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.015 ************************************ 00:07:48.015 START TEST filesystem_in_capsule_btrfs 00:07:48.015 ************************************ 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:48.015 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.275 btrfs-progs v6.6.2 00:07:48.275 See https://btrfs.readthedocs.io for more information. 00:07:48.275 00:07:48.275 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:48.275 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.275 this does not affect your deployments: 00:07:48.275 - DUP for metadata (-m dup) 00:07:48.275 - enabled no-holes (-O no-holes) 00:07:48.275 - enabled free-space-tree (-R free-space-tree) 00:07:48.275 00:07:48.275 Label: (null) 00:07:48.275 UUID: 9f9e14db-4dd4-4137-a1cb-7f970834be11 00:07:48.275 Node size: 16384 00:07:48.275 Sector size: 4096 00:07:48.275 Filesystem size: 510.00MiB 00:07:48.275 Block group profiles: 00:07:48.275 Data: single 8.00MiB 00:07:48.275 Metadata: DUP 32.00MiB 00:07:48.275 System: DUP 8.00MiB 00:07:48.275 SSD detected: yes 00:07:48.275 Zoned device: no 00:07:48.275 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.275 Runtime features: free-space-tree 00:07:48.275 Checksum: crc32c 00:07:48.275 Number of devices: 1 00:07:48.275 Devices: 00:07:48.275 ID SIZE PATH 00:07:48.275 1 510.00MiB /dev/nvme0n1p1 00:07:48.275 00:07:48.275 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:48.275 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1227445 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.534 00:07:48.534 real 0m0.475s 00:07:48.534 user 0m0.026s 00:07:48.534 sys 0m0.125s 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.534 00:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:48.534 ************************************ 00:07:48.534 END TEST filesystem_in_capsule_btrfs 00:07:48.534 ************************************ 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.534 ************************************ 00:07:48.534 START TEST filesystem_in_capsule_xfs 00:07:48.534 ************************************ 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:48.534 00:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:48.793 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:48.793 = sectsz=512 attr=2, projid32bit=1 00:07:48.793 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:48.793 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:48.793 data = bsize=4096 blocks=130560, imaxpct=25 00:07:48.793 = sunit=0 swidth=0 blks 00:07:48.793 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:48.793 log =internal log bsize=4096 blocks=16384, version=2 00:07:48.793 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:48.793 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:49.731 Discarding blocks...Done. 
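The xfs leg that begins above exercises the same mount/write/sync/unmount cycle the btrfs leg just finished; the target/filesystem.sh steps traced here (mount, touch, sync, rm, umount) reduce to the short sketch below. This is a reconstruction from the trace, not the script itself: the device node and mount point are the ones logged above, and the script's retry and cleanup handling is omitted.

mkfs.xfs -f /dev/nvme0n1p1         # format the namespace exported over NVMe/TCP
mount /dev/nvme0n1p1 /mnt/device   # attach it on the initiator side
touch /mnt/device/aaa              # write a file across the fabric
sync                               # flush the dirty data out through nvme-tcp
rm /mnt/device/aaa                 # remove it again
sync
umount /mnt/device                 # detach before the partition is torn down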
00:07:49.731 00:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:49.731 00:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1227445 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.265 00:07:52.265 real 0m3.656s 00:07:52.265 user 0m0.026s 00:07:52.265 sys 0m0.070s 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:52.265 ************************************ 00:07:52.265 END TEST filesystem_in_capsule_xfs 00:07:52.265 ************************************ 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:52.265 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:52.524 00:33:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1227445 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1227445 ']' 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1227445 00:07:52.524 00:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:52.524 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.524 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1227445 00:07:52.524 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.524 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.524 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1227445' 00:07:52.524 killing process with pid 1227445 00:07:52.524 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1227445 00:07:52.524 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1227445 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:53.092 00:07:53.092 real 0m12.275s 00:07:53.092 user 0m48.217s 00:07:53.092 sys 0m1.208s 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.092 ************************************ 00:07:53.092 END TEST nvmf_filesystem_in_capsule 00:07:53.092 ************************************ 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.092 rmmod nvme_tcp 00:07:53.092 rmmod nvme_fabrics 00:07:53.092 rmmod nvme_keyring 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.092 00:33:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.995 00:33:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:54.995 00:07:54.995 real 0m32.953s 00:07:54.995 user 1m38.325s 00:07:54.995 sys 0m6.927s 00:07:54.995 00:33:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.995 00:33:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.995 ************************************ 00:07:54.995 END TEST nvmf_filesystem 00:07:54.995 ************************************ 00:07:55.254 00:33:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:55.254 00:33:06 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:55.254 00:33:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.254 00:33:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.254 00:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.254 ************************************ 00:07:55.254 START TEST nvmf_target_discovery 00:07:55.254 ************************************ 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:55.254 * Looking for test storage... 
00:07:55.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.254 00:33:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.255 00:33:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.850 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.851 00:33:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:01.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:01.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:01.851 Found net devices under 0000:86:00.0: cvl_0_0 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:01.851 Found net devices under 0000:86:00.1: cvl_0_1 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:08:01.851 00:08:01.851 --- 10.0.0.2 ping statistics --- 00:08:01.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.851 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:08:01.851 00:08:01.851 --- 10.0.0.1 ping statistics --- 00:08:01.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.851 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1233762 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1233762 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1233762 ']' 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:01.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.851 00:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.851 [2024-07-13 00:33:12.638669] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:01.851 [2024-07-13 00:33:12.638711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.851 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.851 [2024-07-13 00:33:12.711069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.851 [2024-07-13 00:33:12.751559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.851 [2024-07-13 00:33:12.751603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.851 [2024-07-13 00:33:12.751610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.851 [2024-07-13 00:33:12.751617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.851 [2024-07-13 00:33:12.751622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.851 [2024-07-13 00:33:12.751692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.851 [2024-07-13 00:33:12.751818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.851 [2024-07-13 00:33:12.751905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.851 [2024-07-13 00:33:12.751906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 [2024-07-13 00:33:13.487126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
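The four create rounds that follow (Null1 through Null4) all instantiate the same pattern: a null backing bdev, a subsystem, a namespace, and a TCP listener. Condensed, the loop discovery.sh is tracing looks like the sketch below; rpc.py stands in here for the suite's rpc_cmd wrapper, and the sizes, NQNs, and 10.0.0.2:4420 listener are the values captured in the log.

for i in $(seq 1 4); do
    rpc.py bdev_null_create "Null$i" 102400 512                        # null backing bdev
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                                    # allow any host, fixed serial
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420                                     # one TCP listener per subsystem
done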
00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 Null1 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 [2024-07-13 00:33:13.532594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 Null2 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:02.110 00:33:13 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 Null3 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 Null4 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.110 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.111 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.111 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.111 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.111 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.111 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.111 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:02.371 00:08:02.371 Discovery Log Number of Records 6, Generation counter 6 00:08:02.371 =====Discovery Log Entry 0====== 00:08:02.371 trtype: tcp 00:08:02.371 adrfam: ipv4 00:08:02.371 subtype: current discovery subsystem 00:08:02.371 treq: not required 00:08:02.371 portid: 0 00:08:02.371 trsvcid: 4420 00:08:02.371 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.371 traddr: 10.0.0.2 00:08:02.371 eflags: explicit discovery connections, duplicate discovery information 00:08:02.371 sectype: none 00:08:02.371 =====Discovery Log Entry 1====== 00:08:02.371 trtype: tcp 00:08:02.371 adrfam: ipv4 00:08:02.371 subtype: nvme subsystem 00:08:02.371 treq: not required 00:08:02.371 portid: 0 00:08:02.371 trsvcid: 4420 00:08:02.371 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:02.371 traddr: 10.0.0.2 00:08:02.371 eflags: none 00:08:02.371 sectype: none 00:08:02.371 =====Discovery Log Entry 2====== 00:08:02.371 trtype: tcp 00:08:02.371 adrfam: ipv4 00:08:02.371 subtype: nvme subsystem 00:08:02.371 treq: not required 00:08:02.371 portid: 0 00:08:02.371 trsvcid: 4420 00:08:02.371 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:02.371 traddr: 10.0.0.2 00:08:02.371 eflags: none 00:08:02.371 sectype: none 00:08:02.371 =====Discovery Log Entry 3====== 00:08:02.371 trtype: tcp 00:08:02.371 adrfam: ipv4 00:08:02.371 subtype: nvme subsystem 00:08:02.371 treq: not required 00:08:02.371 portid: 0 00:08:02.371 trsvcid: 4420 00:08:02.371 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:02.371 traddr: 10.0.0.2 00:08:02.371 eflags: none 00:08:02.371 sectype: none 00:08:02.371 =====Discovery Log Entry 4====== 00:08:02.371 trtype: tcp 00:08:02.371 adrfam: ipv4 00:08:02.371 subtype: nvme subsystem 00:08:02.371 treq: not required 
00:08:02.371 portid: 0 00:08:02.371 trsvcid: 4420 00:08:02.371 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:02.371 traddr: 10.0.0.2 00:08:02.371 eflags: none 00:08:02.371 sectype: none 00:08:02.371 =====Discovery Log Entry 5====== 00:08:02.371 trtype: tcp 00:08:02.371 adrfam: ipv4 00:08:02.371 subtype: discovery subsystem referral 00:08:02.371 treq: not required 00:08:02.371 portid: 0 00:08:02.371 trsvcid: 4430 00:08:02.371 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.371 traddr: 10.0.0.2 00:08:02.371 eflags: none 00:08:02.371 sectype: none 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:02.371 Perform nvmf subsystem discovery via RPC 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.371 [ 00:08:02.371 { 00:08:02.371 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:02.371 "subtype": "Discovery", 00:08:02.371 "listen_addresses": [ 00:08:02.371 { 00:08:02.371 "trtype": "TCP", 00:08:02.371 "adrfam": "IPv4", 00:08:02.371 "traddr": "10.0.0.2", 00:08:02.371 "trsvcid": "4420" 00:08:02.371 } 00:08:02.371 ], 00:08:02.371 "allow_any_host": true, 00:08:02.371 "hosts": [] 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.371 "subtype": "NVMe", 00:08:02.371 "listen_addresses": [ 00:08:02.371 { 00:08:02.371 "trtype": "TCP", 00:08:02.371 "adrfam": "IPv4", 00:08:02.371 "traddr": "10.0.0.2", 00:08:02.371 "trsvcid": "4420" 00:08:02.371 } 00:08:02.371 ], 00:08:02.371 "allow_any_host": true, 00:08:02.371 "hosts": [], 00:08:02.371 "serial_number": "SPDK00000000000001", 00:08:02.371 "model_number": "SPDK bdev Controller", 00:08:02.371 "max_namespaces": 32, 00:08:02.371 "min_cntlid": 1, 00:08:02.371 "max_cntlid": 65519, 00:08:02.371 "namespaces": [ 00:08:02.371 { 00:08:02.371 "nsid": 1, 00:08:02.371 "bdev_name": "Null1", 00:08:02.371 "name": "Null1", 00:08:02.371 "nguid": "5BB4EB9364474D34A4675B9ADDE248C5", 00:08:02.371 "uuid": "5bb4eb93-6447-4d34-a467-5b9adde248c5" 00:08:02.371 } 00:08:02.371 ] 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:02.371 "subtype": "NVMe", 00:08:02.371 "listen_addresses": [ 00:08:02.371 { 00:08:02.371 "trtype": "TCP", 00:08:02.371 "adrfam": "IPv4", 00:08:02.371 "traddr": "10.0.0.2", 00:08:02.371 "trsvcid": "4420" 00:08:02.371 } 00:08:02.371 ], 00:08:02.371 "allow_any_host": true, 00:08:02.371 "hosts": [], 00:08:02.371 "serial_number": "SPDK00000000000002", 00:08:02.371 "model_number": "SPDK bdev Controller", 00:08:02.371 "max_namespaces": 32, 00:08:02.371 "min_cntlid": 1, 00:08:02.371 "max_cntlid": 65519, 00:08:02.371 "namespaces": [ 00:08:02.371 { 00:08:02.371 "nsid": 1, 00:08:02.371 "bdev_name": "Null2", 00:08:02.371 "name": "Null2", 00:08:02.371 "nguid": "B5594007FE1944329BA7AE2108DAE790", 00:08:02.371 "uuid": "b5594007-fe19-4432-9ba7-ae2108dae790" 00:08:02.371 } 00:08:02.371 ] 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:02.371 "subtype": "NVMe", 00:08:02.371 "listen_addresses": [ 00:08:02.371 { 00:08:02.371 "trtype": "TCP", 00:08:02.371 "adrfam": "IPv4", 00:08:02.371 "traddr": "10.0.0.2", 00:08:02.371 "trsvcid": "4420" 00:08:02.371 } 00:08:02.371 ], 00:08:02.371 "allow_any_host": true, 
00:08:02.371 "hosts": [], 00:08:02.371 "serial_number": "SPDK00000000000003", 00:08:02.371 "model_number": "SPDK bdev Controller", 00:08:02.371 "max_namespaces": 32, 00:08:02.371 "min_cntlid": 1, 00:08:02.371 "max_cntlid": 65519, 00:08:02.371 "namespaces": [ 00:08:02.371 { 00:08:02.371 "nsid": 1, 00:08:02.371 "bdev_name": "Null3", 00:08:02.371 "name": "Null3", 00:08:02.371 "nguid": "9125F1893BB141DEA8648D9CAC4B4D00", 00:08:02.371 "uuid": "9125f189-3bb1-41de-a864-8d9cac4b4d00" 00:08:02.371 } 00:08:02.371 ] 00:08:02.371 }, 00:08:02.371 { 00:08:02.371 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:02.371 "subtype": "NVMe", 00:08:02.371 "listen_addresses": [ 00:08:02.371 { 00:08:02.371 "trtype": "TCP", 00:08:02.371 "adrfam": "IPv4", 00:08:02.371 "traddr": "10.0.0.2", 00:08:02.371 "trsvcid": "4420" 00:08:02.371 } 00:08:02.371 ], 00:08:02.371 "allow_any_host": true, 00:08:02.371 "hosts": [], 00:08:02.371 "serial_number": "SPDK00000000000004", 00:08:02.371 "model_number": "SPDK bdev Controller", 00:08:02.371 "max_namespaces": 32, 00:08:02.371 "min_cntlid": 1, 00:08:02.371 "max_cntlid": 65519, 00:08:02.371 "namespaces": [ 00:08:02.371 { 00:08:02.371 "nsid": 1, 00:08:02.371 "bdev_name": "Null4", 00:08:02.371 "name": "Null4", 00:08:02.371 "nguid": "3BD68CA621894AA2A01913F5B4F97A70", 00:08:02.371 "uuid": "3bd68ca6-2189-4aa2-a019-13f5b4f97a70" 00:08:02.371 } 00:08:02.371 ] 00:08:02.371 } 00:08:02.371 ] 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.371 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.372 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:02.372 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.372 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:02.372 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.372 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.670 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.671 00:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.671 rmmod nvme_tcp 00:08:02.671 rmmod nvme_fabrics 00:08:02.671 rmmod nvme_keyring 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1233762 ']' 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1233762 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1233762 ']' 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1233762 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1233762 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1233762' 00:08:02.671 killing process with pid 1233762 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1233762 00:08:02.671 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1233762 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.941 00:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.847 00:33:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.847 00:08:04.847 real 0m9.754s 00:08:04.847 user 0m7.982s 00:08:04.847 sys 0m4.728s 00:08:04.847 00:33:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.847 00:33:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.847 ************************************ 00:08:04.847 END TEST nvmf_target_discovery 00:08:04.847 ************************************ 00:08:04.847 00:33:16 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:04.847 00:33:16 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:04.847 00:33:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.847 00:33:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.847 00:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.108 ************************************ 00:08:05.108 START TEST nvmf_referrals 00:08:05.108 ************************************ 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:05.108 * Looking for test storage... 00:08:05.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
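The referral checks this script performs later in the trace all follow one pattern — a sketch, assuming a discovery listener on 10.0.0.2:8009 and scripts/rpc.py reachable from the target namespace:

    # Register the three referral addresses on the discovery service
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # Read them back through the RPC interface ...
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # ... and through the discovery log, as an initiator would see it
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

Both views should report 127.0.0.2 127.0.0.3 127.0.0.4, which is exactly the comparison get_referral_ips makes in the trace below.
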
00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.108 00:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.686 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.686 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:11.686 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.686 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.686 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.686 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.687 00:33:22 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:11.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:11.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.687 00:33:22 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:11.687 Found net devices under 0000:86:00.0: cvl_0_0 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:11.687 Found net devices under 0000:86:00.1: cvl_0_1 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.687 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.688 00:33:22 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:08:11.688 00:08:11.688 --- 10.0.0.2 ping statistics --- 00:08:11.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.688 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:08:11.688 00:08:11.688 --- 10.0.0.1 ping statistics --- 00:08:11.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.688 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1237544 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1237544 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1237544 ']' 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:11.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.688 [2024-07-13 00:33:22.456759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:11.688 [2024-07-13 00:33:22.456807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.688 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.688 [2024-07-13 00:33:22.526629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.688 [2024-07-13 00:33:22.569319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.688 [2024-07-13 00:33:22.569356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.688 [2024-07-13 00:33:22.569363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.688 [2024-07-13 00:33:22.569369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.688 [2024-07-13 00:33:22.569374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.688 [2024-07-13 00:33:22.569419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.688 [2024-07-13 00:33:22.569511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.688 [2024-07-13 00:33:22.569619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.688 [2024-07-13 00:33:22.569620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.688 [2024-07-13 00:33:22.709186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.688 [2024-07-13 00:33:22.722570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:11.688 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.689 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:11.690 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.690 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:11.690 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.690 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.690 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.690 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.690 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:11.948 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:11.948 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.948 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:11.948 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:11.948 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:11.948 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.948 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.206 00:33:23 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.206 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.464 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:12.464 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:12.464 00:33:23 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:12.464 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:12.464 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:12.464 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.464 00:33:23 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.464 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:12.464 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.464 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:12.464 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.723 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.982 
00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:12.982 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:12.982 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:12.982 00:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.983 rmmod nvme_tcp 00:08:12.983 rmmod nvme_fabrics 00:08:12.983 rmmod nvme_keyring 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1237544 ']' 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1237544 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1237544 ']' 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1237544 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1237544 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1237544' 00:08:12.983 killing process with pid 1237544 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1237544 00:08:12.983 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1237544 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.242 00:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.149 00:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.149 00:08:15.149 real 0m10.242s 00:08:15.149 user 0m10.351s 00:08:15.149 sys 0m5.089s 00:08:15.149 00:33:26 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.149 00:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:15.149 ************************************ 00:08:15.149 END TEST nvmf_referrals 00:08:15.149 ************************************ 00:08:15.149 00:33:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.149 00:33:26 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.149 00:33:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.149 00:33:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.409 00:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.409 ************************************ 00:08:15.409 START TEST nvmf_connect_disconnect 00:08:15.409 ************************************ 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:15.409 * Looking for test storage... 00:08:15.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.409 00:33:26 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.409 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.410 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.410 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.410 00:33:26 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:21.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:21.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.990 00:33:32 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:21.990 Found net devices under 0000:86:00.0: cvl_0_0 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:21.990 Found net devices under 0000:86:00.1: cvl_0_1 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.990 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:08:21.991 00:08:21.991 --- 10.0.0.2 ping statistics --- 00:08:21.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.991 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:21.991 00:08:21.991 --- 10.0.0.1 ping statistics --- 00:08:21.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.991 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1241403 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1241403 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1241403 ']' 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.991 00:33:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.991 [2024-07-13 00:33:32.739336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
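Note on the network setup traced above: nvmf_tcp_init carves the host's two physical ports into an initiator side and a target side by moving one port into a private network namespace. A condensed sketch of those commands, assuming this host's ice-driver names cvl_0_0/cvl_0_1 and the suite's default 10.0.0.0/24 addressing:

  # Target port lives in its own namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns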
00:08:21.991 [2024-07-13 00:33:32.739377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.991 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.991 [2024-07-13 00:33:32.810032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.991 [2024-07-13 00:33:32.850314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.991 [2024-07-13 00:33:32.850354] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.991 [2024-07-13 00:33:32.850361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.991 [2024-07-13 00:33:32.850367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.991 [2024-07-13 00:33:32.850371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.991 [2024-07-13 00:33:32.850485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.991 [2024-07-13 00:33:32.850692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.991 [2024-07-13 00:33:32.850608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.991 [2024-07-13 00:33:32.850694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.250 [2024-07-13 00:33:33.598276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.250 00:33:33 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.250 [2024-07-13 00:33:33.650211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:22.250 00:33:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:24.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.542 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [several dozen identical 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' notices, timestamped 00:09:20.076 through 00:11:05.552, trimmed] 00:11:08.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:09.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.635 00:37:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.635 rmmod nvme_tcp 00:12:11.635 rmmod nvme_fabrics 00:12:11.635 rmmod nvme_keyring 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1241403 ']' 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1241403 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
1241403 ']' 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1241403 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1241403 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1241403' 00:12:11.635 killing process with pid 1241403 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1241403 00:12:11.635 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1241403 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.894 00:37:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.799 00:37:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.799 00:12:13.799 real 3m58.592s 00:12:13.799 user 15m14.687s 00:12:13.799 sys 0m20.290s 00:12:13.799 00:37:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.799 00:37:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:13.799 ************************************ 00:12:13.799 END TEST nvmf_connect_disconnect 00:12:13.799 ************************************ 00:12:14.057 00:37:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:14.057 00:37:25 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:14.057 00:37:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:14.057 00:37:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.057 00:37:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.057 ************************************ 00:12:14.057 START TEST nvmf_multitarget 00:12:14.057 ************************************ 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:14.057 * Looking for test storage... 
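For orientation, the connect_disconnect test that just finished provisions one subsystem over the RPC socket and then cycles the initiator a hundred times. The RPC calls below are the ones visible in the trace (issued through the suite's rpc_cmd wrapper, shown here as rpc.py); the loop body is a simplified reconstruction, since the per-iteration commands are hidden by the script's set +x and only their 'disconnected 1 controller(s)' output survives:

  # Target provisioning, as traced above.
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                        # returns bdev name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Reconstructed iteration loop (num_iterations=100, NVME_CONNECT='nvme connect -i 8').
  for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # emits the 'disconnected' notices
  done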
00:12:14.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.057 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
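The nvmftestinit call above now repeats the same prepare_net_devs discovery pass that ran for the previous test: it matches the host's PCI functions against a table of supported NIC device IDs (Intel E810/X722 and several Mellanox parts, per the trace) and resolves each match to its kernel netdev through sysfs. Roughly, with this host's two E810 functions hard-coded for illustration (the real logic is gather_supported_nvmf_pci_devs in test/nvmf/common.sh):

  e810=(0000:86:00.0 0000:86:00.1)     # functions that matched vendor 0x8086, device 0x159b
  net_devs=()
  for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs maps a PCI function to its netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path: cvl_0_0, cvl_0_1
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done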
00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.058 00:37:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.626 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:20.627 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:20.627 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:20.627 Found net devices under 0000:86:00.0: cvl_0_0 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:20.627 Found net devices under 0000:86:00.1: cvl_0_1 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:20.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:20.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:12:20.627 00:12:20.627 --- 10.0.0.2 ping statistics --- 00:12:20.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.627 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:12:20.627 00:12:20.627 --- 10.0.0.1 ping statistics --- 00:12:20.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.627 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1284735 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1284735 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1284735 ']' 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.627 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.628 [2024-07-13 00:37:31.373343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
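nvmfappstart launched the target inside the namespace, so it binds the target-side address rather than the root namespace. The effective command, assembled from NVMF_TARGET_NS_CMD plus NVMF_APP and traced above at nvmf/common.sh@480, with the flags annotated:

  # -i 0: shared-memory id (NVMF_APP_SHM_ID); -e 0xFFFF: tracepoint group mask,
  # matching the notice above; -m 0xF: four-core mask, hence the four reactors in the log.
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF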
00:12:20.628 [2024-07-13 00:37:31.373390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.628 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.628 [2024-07-13 00:37:31.444955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.628 [2024-07-13 00:37:31.486902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.628 [2024-07-13 00:37:31.486941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.628 [2024-07-13 00:37:31.486948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.628 [2024-07-13 00:37:31.486954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.628 [2024-07-13 00:37:31.486960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.628 [2024-07-13 00:37:31.487005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.628 [2024-07-13 00:37:31.487117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.628 [2024-07-13 00:37:31.487228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.628 [2024-07-13 00:37:31.487238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:20.628 "nvmf_tgt_1" 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:20.628 "nvmf_tgt_2" 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:20.628 00:37:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:20.628 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:20.628 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:20.628 true 00:12:20.628 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:20.887 true 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.887 rmmod nvme_tcp 00:12:20.887 rmmod nvme_fabrics 00:12:20.887 rmmod nvme_keyring 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1284735 ']' 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1284735 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1284735 ']' 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1284735 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1284735 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1284735' 00:12:20.887 killing process with pid 1284735 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1284735 00:12:20.887 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1284735 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.146 00:37:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.682 00:37:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.682 00:12:23.682 real 0m9.265s 00:12:23.682 user 0m6.629s 00:12:23.682 sys 0m4.804s 00:12:23.682 00:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.682 00:37:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.682 ************************************ 00:12:23.682 END TEST nvmf_multitarget 00:12:23.682 ************************************ 00:12:23.682 00:37:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:23.682 00:37:34 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:23.682 00:37:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:23.682 00:37:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.682 00:37:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.682 ************************************ 00:12:23.682 START TEST nvmf_rpc 00:12:23.682 ************************************ 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:23.682 * Looking for test storage... 
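Note: the multitarget test that just finished drives the target-management RPCs and validates each step by piping nvmf_get_targets through jq length (1 target by default, 3 after the two creates, back to 1 after the deletes). A minimal standalone sketch of that sequence, assuming this workspace layout and an nvmf_tgt already running; the helper path and the -s value are copied from the trace above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target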
00:12:23.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.682 00:37:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.683 00:37:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
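Note: gather_supported_nvmf_pci_devs, being traced here, whitelists NICs by PCI vendor/device ID (e810 as 0x1592/0x159b, x722 as 0x37d2, plus several Mellanox parts) and then resolves each matching PCI address to its kernel net interface through sysfs. A rough equivalent outside the harness is sketched below; using lspci for the scan is my assumption, not what common.sh does, but the 8086:159b ID and the sysfs path are the ones matched in the trace:

    # Sketch only: find Intel E810 ports and their net interfaces.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "E810 port $pci:"
      ls "/sys/bus/pci/devices/$pci/net/"   # e.g. cvl_0_0 / cvl_0_1 as found below
    done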
00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.992 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:28.993 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:28.993 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:28.993 Found net devices under 0000:86:00.0: cvl_0_0 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:28.993 Found net devices under 0000:86:00.1: cvl_0_1 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.993 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:29.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:12:29.252 00:12:29.252 --- 10.0.0.2 ping statistics --- 00:12:29.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.252 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:12:29.252 00:12:29.252 --- 10.0.0.1 ping statistics --- 00:12:29.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.252 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.252 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1288302 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1288302 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1288302 ']' 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.253 00:37:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.253 [2024-07-13 00:37:40.701485] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:29.253 [2024-07-13 00:37:40.701530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.253 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.253 [2024-07-13 00:37:40.773837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.512 [2024-07-13 00:37:40.815976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.512 [2024-07-13 00:37:40.816016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
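Note: before the target was started, nvmftestinit split the two E810 ports into a point-to-point topology: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (target side), cvl_0_1 stayed in the root namespace with 10.0.0.1/24 (initiator side), and TCP port 4420 was opened, which the two pings then verify in both directions. A condensed sketch of those steps, copied from the trace above (needs root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root ns -> target ns, as verified above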
00:12:29.512 [2024-07-13 00:37:40.816023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.512 [2024-07-13 00:37:40.816029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.512 [2024-07-13 00:37:40.816034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.512 [2024-07-13 00:37:40.816110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.512 [2024-07-13 00:37:40.816282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.512 [2024-07-13 00:37:40.816320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.512 [2024-07-13 00:37:40.816322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:30.078 "tick_rate": 2300000000, 00:12:30.078 "poll_groups": [ 00:12:30.078 { 00:12:30.078 "name": "nvmf_tgt_poll_group_000", 00:12:30.078 "admin_qpairs": 0, 00:12:30.078 "io_qpairs": 0, 00:12:30.078 "current_admin_qpairs": 0, 00:12:30.078 "current_io_qpairs": 0, 00:12:30.078 "pending_bdev_io": 0, 00:12:30.078 "completed_nvme_io": 0, 00:12:30.078 "transports": [] 00:12:30.078 }, 00:12:30.078 { 00:12:30.078 "name": "nvmf_tgt_poll_group_001", 00:12:30.078 "admin_qpairs": 0, 00:12:30.078 "io_qpairs": 0, 00:12:30.078 "current_admin_qpairs": 0, 00:12:30.078 "current_io_qpairs": 0, 00:12:30.078 "pending_bdev_io": 0, 00:12:30.078 "completed_nvme_io": 0, 00:12:30.078 "transports": [] 00:12:30.078 }, 00:12:30.078 { 00:12:30.078 "name": "nvmf_tgt_poll_group_002", 00:12:30.078 "admin_qpairs": 0, 00:12:30.078 "io_qpairs": 0, 00:12:30.078 "current_admin_qpairs": 0, 00:12:30.078 "current_io_qpairs": 0, 00:12:30.078 "pending_bdev_io": 0, 00:12:30.078 "completed_nvme_io": 0, 00:12:30.078 "transports": [] 00:12:30.078 }, 00:12:30.078 { 00:12:30.078 "name": "nvmf_tgt_poll_group_003", 00:12:30.078 "admin_qpairs": 0, 00:12:30.078 "io_qpairs": 0, 00:12:30.078 "current_admin_qpairs": 0, 00:12:30.078 "current_io_qpairs": 0, 00:12:30.078 "pending_bdev_io": 0, 00:12:30.078 "completed_nvme_io": 0, 00:12:30.078 "transports": [] 00:12:30.078 } 00:12:30.078 ] 00:12:30.078 }' 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:30.078 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.337 [2024-07-13 00:37:41.654491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:30.337 "tick_rate": 2300000000, 00:12:30.337 "poll_groups": [ 00:12:30.337 { 00:12:30.337 "name": "nvmf_tgt_poll_group_000", 00:12:30.337 "admin_qpairs": 0, 00:12:30.337 "io_qpairs": 0, 00:12:30.337 "current_admin_qpairs": 0, 00:12:30.337 "current_io_qpairs": 0, 00:12:30.337 "pending_bdev_io": 0, 00:12:30.337 "completed_nvme_io": 0, 00:12:30.337 "transports": [ 00:12:30.337 { 00:12:30.337 "trtype": "TCP" 00:12:30.337 } 00:12:30.337 ] 00:12:30.337 }, 00:12:30.337 { 00:12:30.337 "name": "nvmf_tgt_poll_group_001", 00:12:30.337 "admin_qpairs": 0, 00:12:30.337 "io_qpairs": 0, 00:12:30.337 "current_admin_qpairs": 0, 00:12:30.337 "current_io_qpairs": 0, 00:12:30.337 "pending_bdev_io": 0, 00:12:30.337 "completed_nvme_io": 0, 00:12:30.337 "transports": [ 00:12:30.337 { 00:12:30.337 "trtype": "TCP" 00:12:30.337 } 00:12:30.337 ] 00:12:30.337 }, 00:12:30.337 { 00:12:30.337 "name": "nvmf_tgt_poll_group_002", 00:12:30.337 "admin_qpairs": 0, 00:12:30.337 "io_qpairs": 0, 00:12:30.337 "current_admin_qpairs": 0, 00:12:30.337 "current_io_qpairs": 0, 00:12:30.337 "pending_bdev_io": 0, 00:12:30.337 "completed_nvme_io": 0, 00:12:30.337 "transports": [ 00:12:30.337 { 00:12:30.337 "trtype": "TCP" 00:12:30.337 } 00:12:30.337 ] 00:12:30.337 }, 00:12:30.337 { 00:12:30.337 "name": "nvmf_tgt_poll_group_003", 00:12:30.337 "admin_qpairs": 0, 00:12:30.337 "io_qpairs": 0, 00:12:30.337 "current_admin_qpairs": 0, 00:12:30.337 "current_io_qpairs": 0, 00:12:30.337 "pending_bdev_io": 0, 00:12:30.337 "completed_nvme_io": 0, 00:12:30.337 "transports": [ 00:12:30.337 { 00:12:30.337 "trtype": "TCP" 00:12:30.337 } 00:12:30.337 ] 00:12:30.337 } 00:12:30.337 ] 00:12:30.337 }' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
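Note: the jcount/jsum helpers being traced here are thin jq reductions over the nvmf_get_stats JSON shown above: jcount counts the lines a filter emits, jsum sums them with awk. Reproduced outside the harness below; rpc_cmd is the harness wrapper, and my assumption is that calling scripts/rpc.py nvmf_get_stats directly behaves the same:

    stats=$(rpc_cmd nvmf_get_stats)
    echo "$stats" | jq '.poll_groups[].name' | wc -l                                # jcount: 4 poll groups
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'  # jsum: 0 before any connect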
00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.337 Malloc1 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.337 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 [2024-07-13 00:37:41.822637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:30.338 [2024-07-13 00:37:41.851173] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:30.338 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:30.338 could not add new controller: failed to write to nvme-fabrics device 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.338 00:37:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.715 00:37:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.715 00:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.715 00:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.715 00:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:31.715 00:37:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:33.621 00:37:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:33.621 00:37:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:33.621 00:37:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.621 00:37:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:33.621 00:37:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.621 00:37:44 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:33.621 00:37:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.621 00:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.621 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:33.621 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:33.621 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.880 [2024-07-13 00:37:45.232917] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:33.880 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:33.880 could not add new controller: failed to write to nvme-fabrics device 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:33.880 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:33.881 00:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:33.881 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.881 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.881 00:37:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.881 00:37:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.817 00:37:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.817 00:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:34.817 00:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.817 00:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:34.817 00:37:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:37.351 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.352 00:37:48 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 [2024-07-13 00:37:48.591261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.352 00:37:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.288 00:37:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.288 00:37:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.288 00:37:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.288 00:37:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:38.288 00:37:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:40.191 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:40.191 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:40.191 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.191 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:40.191 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.191 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:40.191 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.450 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.451 [2024-07-13 00:37:51.884437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.451 00:37:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.829 00:37:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.829 00:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:41.829 00:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.829 00:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.829 00:37:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.733 [2024-07-13 00:37:55.209924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
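Note: waitforserial, traced repeatedly in this loop, is what turns nvme connect into a synchronous step: it polls lsblk for a block device whose SERIAL matches the subsystem's serial number, up to 16 tries two seconds apart. Its core logic, condensed from the trace (argument handling simplified):

    waitforserial() {
      local serial=$1 want=${2:-1} i=0
      while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
      done
      return 1   # device never appeared
    }
    waitforserial SPDKISFASTANDAWESOME   # serial set by nvmf_create_subsystem -s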
10.0.0.2 port 4420 *** 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.733 00:37:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.108 00:37:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.108 00:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.108 00:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.108 00:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:45.108 00:37:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:47.012 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:47.012 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.013 [2024-07-13 00:37:58.452483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.013 00:37:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.390 00:37:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.390 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.390 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.390 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:48.390 00:37:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.294 
00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.294 [2024-07-13 00:38:01.782964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.294 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.295 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.295 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.295 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.295 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.295 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.295 00:38:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.295 00:38:01 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.295 00:38:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.706 00:38:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.706 00:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.706 00:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.706 00:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.706 00:38:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.605 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.605 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.605 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.605 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.605 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.605 00:38:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.605 00:38:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 [2024-07-13 00:38:05.079220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 [2024-07-13 00:38:05.127351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.605 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 [2024-07-13 00:38:05.179500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
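[editor's note] The connect/wait/disconnect cycles traced above all funnel through waitforserial, which the xtrace shows sleeping and then counting lsblk rows that carry the subsystem serial. A minimal sketch of that loop, inferred from the @1198-@1208 records rather than copied from autotest_common.sh:

    # Hypothetical reconstruction: poll until a block device with the given
    # SERIAL shows up; the trace shows at most 16 attempts ("(( i++ <= 15 ))")
    # with a 2-second sleep before each count.
    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

In the passes logged here the first count already matches, because the nvme connect at rpc.sh@86 has created the device by the time the sleep ends.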
00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 [2024-07-13 00:38:05.227677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
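[editor's note] The rpc.sh@99-107 iterations running through these records exercise the same subsystem lifecycle five times. Collapsed into the loop they trace (NQN, serial, bdev name, and the rpc.py path are taken verbatim from the log; rpc_cmd is sketched as a direct rpc.py call):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # nsid 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Unlike the earlier loop, no nvme connect happens here; it only verifies that create/listen/attach/detach/delete can be repeated back to back.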
00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 [2024-07-13 00:38:05.275818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:53.864 "tick_rate": 2300000000, 00:12:53.864 "poll_groups": [ 00:12:53.864 { 00:12:53.864 "name": "nvmf_tgt_poll_group_000", 00:12:53.864 "admin_qpairs": 2, 00:12:53.864 "io_qpairs": 168, 00:12:53.864 "current_admin_qpairs": 0, 00:12:53.864 "current_io_qpairs": 0, 00:12:53.864 "pending_bdev_io": 0, 00:12:53.864 "completed_nvme_io": 266, 00:12:53.864 "transports": [ 00:12:53.864 { 00:12:53.864 "trtype": "TCP" 00:12:53.864 } 00:12:53.864 ] 00:12:53.864 }, 00:12:53.864 { 00:12:53.864 "name": "nvmf_tgt_poll_group_001", 00:12:53.864 "admin_qpairs": 2, 00:12:53.864 "io_qpairs": 168, 00:12:53.864 "current_admin_qpairs": 0, 00:12:53.864 "current_io_qpairs": 0, 00:12:53.864 "pending_bdev_io": 0, 00:12:53.864 "completed_nvme_io": 270, 00:12:53.864 "transports": [ 00:12:53.864 { 00:12:53.864 "trtype": "TCP" 00:12:53.864 } 00:12:53.864 ] 00:12:53.864 }, 00:12:53.864 { 
00:12:53.864 "name": "nvmf_tgt_poll_group_002", 00:12:53.864 "admin_qpairs": 1, 00:12:53.864 "io_qpairs": 168, 00:12:53.864 "current_admin_qpairs": 0, 00:12:53.864 "current_io_qpairs": 0, 00:12:53.864 "pending_bdev_io": 0, 00:12:53.864 "completed_nvme_io": 219, 00:12:53.864 "transports": [ 00:12:53.864 { 00:12:53.864 "trtype": "TCP" 00:12:53.864 } 00:12:53.864 ] 00:12:53.864 }, 00:12:53.864 { 00:12:53.864 "name": "nvmf_tgt_poll_group_003", 00:12:53.864 "admin_qpairs": 2, 00:12:53.864 "io_qpairs": 168, 00:12:53.864 "current_admin_qpairs": 0, 00:12:53.864 "current_io_qpairs": 0, 00:12:53.864 "pending_bdev_io": 0, 00:12:53.864 "completed_nvme_io": 267, 00:12:53.864 "transports": [ 00:12:53.864 { 00:12:53.864 "trtype": "TCP" 00:12:53.864 } 00:12:53.864 ] 00:12:53.864 } 00:12:53.864 ] 00:12:53.864 }' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:53.864 00:38:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:54.123 rmmod nvme_tcp 00:12:54.123 rmmod nvme_fabrics 00:12:54.123 rmmod nvme_keyring 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1288302 ']' 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1288302 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1288302 ']' 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1288302 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.123 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1288302 00:12:54.124 00:38:05 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:54.124 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:54.124 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1288302' 00:12:54.124 killing process with pid 1288302 00:12:54.124 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1288302 00:12:54.124 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1288302 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.383 00:38:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.290 00:38:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:56.290 00:12:56.290 real 0m33.054s 00:12:56.290 user 1m40.945s 00:12:56.290 sys 0m6.187s 00:12:56.290 00:38:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.290 00:38:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.290 ************************************ 00:12:56.290 END TEST nvmf_rpc 00:12:56.290 ************************************ 00:12:56.290 00:38:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:56.290 00:38:07 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:56.290 00:38:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:56.290 00:38:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.290 00:38:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.551 ************************************ 00:12:56.551 START TEST nvmf_invalid 00:12:56.551 ************************************ 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:56.551 * Looking for test storage... 
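[editor's note] In the stats check that closed the nvmf_rpc run (rpc.sh@112-113 above), the jsum helper sums a jq filter across the captured nvmf_get_stats output. Reconstructed from the @19-@20 trace, with $stats assumed to hold the JSON shown earlier:

    jsum() {
        local filter=$1
        # Print one number per poll group, then total them.
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # jsum '.poll_groups[].admin_qpairs'  -> 7    (2 + 2 + 1 + 2)
    # jsum '.poll_groups[].io_qpairs'     -> 672  (4 x 168)

Both totals are only asserted to be greater than zero, which is why the test passes without pinning exact counts.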
00:12:56.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:56.551 00:38:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:03.128 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:03.128 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:03.128 Found net devices under 0000:86:00.0: cvl_0_0 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:03.128 Found net devices under 0000:86:00.1: cvl_0_1 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:13:03.128 00:13:03.128 --- 10.0.0.2 ping statistics --- 00:13:03.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.128 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:13:03.128 00:13:03.128 --- 10.0.0.1 ping statistics --- 00:13:03.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.128 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.128 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1296119 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1296119 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1296119 ']' 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.129 00:38:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 [2024-07-13 00:38:13.809653] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
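[editor's note] The nvmfappstart records just above (nvmf/common.sh@480-482) start the target inside the cvl_0_0_ns_spdk namespace and then block on waitforlisten. Roughly, with the binary path and flags copied from the @480 record and the readiness poll sketched rather than copied:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten: spin until the app answers on its default RPC socket.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done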
00:13:03.129 [2024-07-13 00:38:13.809695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.129 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.129 [2024-07-13 00:38:13.881331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.129 [2024-07-13 00:38:13.923982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.129 [2024-07-13 00:38:13.924018] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.129 [2024-07-13 00:38:13.924025] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.129 [2024-07-13 00:38:13.924030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.129 [2024-07-13 00:38:13.924036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.129 [2024-07-13 00:38:13.924101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.129 [2024-07-13 00:38:13.924243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.129 [2024-07-13 00:38:13.924334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.129 [2024-07-13 00:38:13.924334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.129 00:38:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21721 00:13:03.388 [2024-07-13 00:38:14.818645] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:03.388 00:38:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:03.388 { 00:13:03.388 "nqn": "nqn.2016-06.io.spdk:cnode21721", 00:13:03.388 "tgt_name": "foobar", 00:13:03.388 "method": "nvmf_create_subsystem", 00:13:03.388 "req_id": 1 00:13:03.388 } 00:13:03.388 Got JSON-RPC error response 00:13:03.388 response: 00:13:03.388 { 00:13:03.388 "code": -32603, 00:13:03.388 "message": "Unable to find target foobar" 00:13:03.388 }' 00:13:03.388 00:38:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:03.389 { 00:13:03.389 "nqn": "nqn.2016-06.io.spdk:cnode21721", 00:13:03.389 "tgt_name": "foobar", 00:13:03.389 "method": "nvmf_create_subsystem", 00:13:03.389 "req_id": 1 00:13:03.389 } 00:13:03.389 Got JSON-RPC error response 00:13:03.389 response: 00:13:03.389 { 00:13:03.389 "code": -32603, 00:13:03.389 "message": "Unable to find target foobar" 
00:13:03.389 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:03.389 00:38:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:03.389 00:38:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10412 00:13:03.648 [2024-07-13 00:38:15.015337] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10412: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:03.648 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:03.648 { 00:13:03.648 "nqn": "nqn.2016-06.io.spdk:cnode10412", 00:13:03.648 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:03.648 "method": "nvmf_create_subsystem", 00:13:03.648 "req_id": 1 00:13:03.648 } 00:13:03.648 Got JSON-RPC error response 00:13:03.648 response: 00:13:03.648 { 00:13:03.648 "code": -32602, 00:13:03.648 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:03.648 }' 00:13:03.648 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:03.648 { 00:13:03.648 "nqn": "nqn.2016-06.io.spdk:cnode10412", 00:13:03.648 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:03.648 "method": "nvmf_create_subsystem", 00:13:03.648 "req_id": 1 00:13:03.648 } 00:13:03.648 Got JSON-RPC error response 00:13:03.648 response: 00:13:03.648 { 00:13:03.648 "code": -32602, 00:13:03.648 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:03.648 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:03.648 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:03.648 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20646 00:13:03.648 [2024-07-13 00:38:15.199960] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20646: invalid model number 'SPDK_Controller' 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:03.908 { 00:13:03.908 "nqn": "nqn.2016-06.io.spdk:cnode20646", 00:13:03.908 "model_number": "SPDK_Controller\u001f", 00:13:03.908 "method": "nvmf_create_subsystem", 00:13:03.908 "req_id": 1 00:13:03.908 } 00:13:03.908 Got JSON-RPC error response 00:13:03.908 response: 00:13:03.908 { 00:13:03.908 "code": -32602, 00:13:03.908 "message": "Invalid MN SPDK_Controller\u001f" 00:13:03.908 }' 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:03.908 { 00:13:03.908 "nqn": "nqn.2016-06.io.spdk:cnode20646", 00:13:03.908 "model_number": "SPDK_Controller\u001f", 00:13:03.908 "method": "nvmf_create_subsystem", 00:13:03.908 "req_id": 1 00:13:03.908 } 00:13:03.908 Got JSON-RPC error response 00:13:03.908 response: 00:13:03.908 { 00:13:03.908 "code": -32602, 00:13:03.908 "message": "Invalid MN SPDK_Controller\u001f" 00:13:03.908 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.908 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 
00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v`o2pM<S;*DKL>!Ju_V\M' 00:13:03.909 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'v`o2pM<S;*DKL>!Ju_V\M' nqn.2016-06.io.spdk:cnode22389 00:13:04.169 [2024-07-13 00:38:15.512983] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22389: invalid serial number 'v`o2pM<S;*DKL>!Ju_V\M' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:04.169 { 00:13:04.169 "nqn": "nqn.2016-06.io.spdk:cnode22389", 00:13:04.169 "serial_number": "v`o2pM<S;*DKL>!Ju_V\\M", 00:13:04.169
"method": "nvmf_create_subsystem", 00:13:04.169 "req_id": 1 00:13:04.169 } 00:13:04.169 Got JSON-RPC error response 00:13:04.169 response: 00:13:04.169 { 00:13:04.169 "code": -32602, 00:13:04.169 "message": "Invalid SN v`o2pM!Ju_V\\M" 00:13:04.169 }' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:04.169 { 00:13:04.169 "nqn": "nqn.2016-06.io.spdk:cnode22389", 00:13:04.169 "serial_number": "v`o2pM!Ju_V\\M", 00:13:04.169 "method": "nvmf_create_subsystem", 00:13:04.169 "req_id": 1 00:13:04.169 } 00:13:04.169 Got JSON-RPC error response 00:13:04.169 response: 00:13:04.169 { 00:13:04.169 "code": -32602, 00:13:04.169 "message": "Invalid SN v`o2pM!Ju_V\\M" 00:13:04.169 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:04.169 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.170 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 
00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '~+n<*P6.C{9.A)6-Z&VJ5JS_7RD0:h^Xli*ky.%4a' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '~+n<*P6.C{9.A)6-Z&VJ5JS_7RD0:h^Xli*ky.%4a' nqn.2016-06.io.spdk:cnode19065 00:13:04.430 [2024-07-13 00:38:15.946484] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19065: invalid model number '~+n<*P6.C{9.A)6-Z&VJ5JS_7RD0:h^Xli*ky.%4a' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:04.430 { 00:13:04.430 "nqn": "nqn.2016-06.io.spdk:cnode19065", 00:13:04.430 "model_number": "~+n<*P6.C{9.A)6-Z&VJ5JS_7RD0:h^Xli*ky.%4a", 00:13:04.430 "method": "nvmf_create_subsystem", 00:13:04.430 "req_id": 1 00:13:04.430 } 00:13:04.430 Got JSON-RPC error response 00:13:04.430 response: 00:13:04.430 { 00:13:04.430 "code": -32602, 00:13:04.430 "message": "Invalid MN ~+n<*P6.C{9.A)6-Z&VJ5JS_7RD0:h^Xli*ky.%4a" 00:13:04.430 }' 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:04.430 { 00:13:04.430 "nqn": "nqn.2016-06.io.spdk:cnode19065", 00:13:04.430 "model_number": "~+n<*P6.C{9.A)6-Z&VJ5JS_7RD0:h^Xli*ky.%4a", 00:13:04.430 "method": "nvmf_create_subsystem", 00:13:04.430 "req_id": 1 
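The two long xtrace runs above (gen_random_s 21 and gen_random_s 41) are the same helper building a random string one character at a time: pick a code point from the printable range 32-127, render it with printf %x plus echo -e, and append it to string. Condensed into a standalone sketch (the function name matches the script, but this body is a simplified reconstruction, not the verbatim helper):

    gen_random_s() {
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            # random code point in 32..127, rendered via an escape like '\x4d'
            local hex
            hex=$(printf '%x' $(( 32 + RANDOM % 96 )))
            string+=$(echo -e "\x$hex")
        done
        # printf instead of echo, so a leading '-' cannot be eaten as an option
        printf '%s\n' "$string"
    }

    gen_random_s 21    # e.g. the 21-character serial number tried above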
00:13:04.430 } 00:13:04.430 Got JSON-RPC error response 00:13:04.430 response: 00:13:04.430 { 00:13:04.430 "code": -32602, 00:13:04.430 "message": "Invalid MN ~+n<*P6.C{9.A)6-Z&VJ5JS_7RD0:h^Xli*ky.%4a" 00:13:04.430 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.430 00:38:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:04.689 [2024-07-13 00:38:16.139237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.689 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:04.948 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:04.948 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:04.948 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:04.948 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:04.948 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:05.207 [2024-07-13 00:38:16.524533] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:05.207 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:05.207 { 00:13:05.207 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.207 "listen_address": { 00:13:05.207 "trtype": "tcp", 00:13:05.207 "traddr": "", 00:13:05.207 "trsvcid": "4421" 00:13:05.207 }, 00:13:05.207 "method": "nvmf_subsystem_remove_listener", 00:13:05.207 "req_id": 1 00:13:05.207 } 00:13:05.207 Got JSON-RPC error response 00:13:05.207 response: 00:13:05.207 { 00:13:05.207 "code": -32602, 00:13:05.207 "message": "Invalid parameters" 00:13:05.207 }' 00:13:05.207 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:05.207 { 00:13:05.207 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.207 "listen_address": { 00:13:05.207 "trtype": "tcp", 00:13:05.207 "traddr": "", 00:13:05.207 "trsvcid": "4421" 00:13:05.207 }, 00:13:05.207 "method": "nvmf_subsystem_remove_listener", 00:13:05.207 "req_id": 1 00:13:05.207 } 00:13:05.207 Got JSON-RPC error response 00:13:05.207 response: 00:13:05.207 { 00:13:05.207 "code": -32602, 00:13:05.207 "message": "Invalid parameters" 00:13:05.207 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:05.207 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3454 -i 0 00:13:05.207 [2024-07-13 00:38:16.709128] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3454: invalid cntlid range [0-65519] 00:13:05.207 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:05.207 { 00:13:05.207 "nqn": "nqn.2016-06.io.spdk:cnode3454", 00:13:05.207 "min_cntlid": 0, 00:13:05.207 "method": "nvmf_create_subsystem", 00:13:05.207 "req_id": 1 00:13:05.207 } 00:13:05.207 Got JSON-RPC error response 00:13:05.207 response: 00:13:05.207 { 00:13:05.207 "code": -32602, 00:13:05.207 "message": "Invalid cntlid range [0-65519]" 00:13:05.207 }' 00:13:05.207 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:05.207 { 00:13:05.207 "nqn": "nqn.2016-06.io.spdk:cnode3454", 00:13:05.207 
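Before the cntlid probes, the harness above also set up a TCP transport and a valid subsystem, then tried to remove a listener that was never added: nvmf_subsystem_remove_listener with an empty traddr fails with -32602 "Invalid parameters". In isolation the sequence looks like this sketch (rpc path illustrative; the flags and addresses are the ones in the log):

    rpc=/path/to/spdk/scripts/rpc.py    # illustrative path
    $rpc nvmf_create_transport --trtype tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
    out=$($rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
        -t tcp -a '' -s 4421 2>&1) || true
    [[ $out == *"Invalid parameters"* ]] || echo "unexpected: $out"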
"min_cntlid": 0, 00:13:05.207 "method": "nvmf_create_subsystem", 00:13:05.207 "req_id": 1 00:13:05.207 } 00:13:05.207 Got JSON-RPC error response 00:13:05.207 response: 00:13:05.207 { 00:13:05.207 "code": -32602, 00:13:05.207 "message": "Invalid cntlid range [0-65519]" 00:13:05.207 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.207 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25704 -i 65520 00:13:05.466 [2024-07-13 00:38:16.893777] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25704: invalid cntlid range [65520-65519] 00:13:05.466 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:05.466 { 00:13:05.466 "nqn": "nqn.2016-06.io.spdk:cnode25704", 00:13:05.466 "min_cntlid": 65520, 00:13:05.466 "method": "nvmf_create_subsystem", 00:13:05.466 "req_id": 1 00:13:05.466 } 00:13:05.466 Got JSON-RPC error response 00:13:05.466 response: 00:13:05.466 { 00:13:05.466 "code": -32602, 00:13:05.466 "message": "Invalid cntlid range [65520-65519]" 00:13:05.466 }' 00:13:05.466 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:05.466 { 00:13:05.466 "nqn": "nqn.2016-06.io.spdk:cnode25704", 00:13:05.466 "min_cntlid": 65520, 00:13:05.466 "method": "nvmf_create_subsystem", 00:13:05.466 "req_id": 1 00:13:05.466 } 00:13:05.466 Got JSON-RPC error response 00:13:05.466 response: 00:13:05.466 { 00:13:05.466 "code": -32602, 00:13:05.466 "message": "Invalid cntlid range [65520-65519]" 00:13:05.466 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.466 00:38:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4683 -I 0 00:13:05.726 [2024-07-13 00:38:17.154692] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4683: invalid cntlid range [1-0] 00:13:05.726 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:05.726 { 00:13:05.726 "nqn": "nqn.2016-06.io.spdk:cnode4683", 00:13:05.726 "max_cntlid": 0, 00:13:05.726 "method": "nvmf_create_subsystem", 00:13:05.726 "req_id": 1 00:13:05.726 } 00:13:05.726 Got JSON-RPC error response 00:13:05.726 response: 00:13:05.726 { 00:13:05.726 "code": -32602, 00:13:05.726 "message": "Invalid cntlid range [1-0]" 00:13:05.726 }' 00:13:05.726 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:05.726 { 00:13:05.726 "nqn": "nqn.2016-06.io.spdk:cnode4683", 00:13:05.726 "max_cntlid": 0, 00:13:05.726 "method": "nvmf_create_subsystem", 00:13:05.726 "req_id": 1 00:13:05.726 } 00:13:05.726 Got JSON-RPC error response 00:13:05.726 response: 00:13:05.726 { 00:13:05.726 "code": -32602, 00:13:05.726 "message": "Invalid cntlid range [1-0]" 00:13:05.726 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.726 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode515 -I 65520 00:13:05.985 [2024-07-13 00:38:17.347392] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode515: invalid cntlid range [1-65520] 00:13:05.985 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:05.985 { 00:13:05.985 "nqn": "nqn.2016-06.io.spdk:cnode515", 00:13:05.985 "max_cntlid": 65520, 00:13:05.985 "method": 
"nvmf_create_subsystem", 00:13:05.985 "req_id": 1 00:13:05.985 } 00:13:05.985 Got JSON-RPC error response 00:13:05.985 response: 00:13:05.985 { 00:13:05.985 "code": -32602, 00:13:05.985 "message": "Invalid cntlid range [1-65520]" 00:13:05.985 }' 00:13:05.985 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:05.985 { 00:13:05.985 "nqn": "nqn.2016-06.io.spdk:cnode515", 00:13:05.985 "max_cntlid": 65520, 00:13:05.985 "method": "nvmf_create_subsystem", 00:13:05.985 "req_id": 1 00:13:05.985 } 00:13:05.985 Got JSON-RPC error response 00:13:05.985 response: 00:13:05.985 { 00:13:05.985 "code": -32602, 00:13:05.985 "message": "Invalid cntlid range [1-65520]" 00:13:05.985 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.985 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13624 -i 6 -I 5 00:13:05.985 [2024-07-13 00:38:17.544062] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13624: invalid cntlid range [6-5] 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:06.244 { 00:13:06.244 "nqn": "nqn.2016-06.io.spdk:cnode13624", 00:13:06.244 "min_cntlid": 6, 00:13:06.244 "max_cntlid": 5, 00:13:06.244 "method": "nvmf_create_subsystem", 00:13:06.244 "req_id": 1 00:13:06.244 } 00:13:06.244 Got JSON-RPC error response 00:13:06.244 response: 00:13:06.244 { 00:13:06.244 "code": -32602, 00:13:06.244 "message": "Invalid cntlid range [6-5]" 00:13:06.244 }' 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:06.244 { 00:13:06.244 "nqn": "nqn.2016-06.io.spdk:cnode13624", 00:13:06.244 "min_cntlid": 6, 00:13:06.244 "max_cntlid": 5, 00:13:06.244 "method": "nvmf_create_subsystem", 00:13:06.244 "req_id": 1 00:13:06.244 } 00:13:06.244 Got JSON-RPC error response 00:13:06.244 response: 00:13:06.244 { 00:13:06.244 "code": -32602, 00:13:06.244 "message": "Invalid cntlid range [6-5]" 00:13:06.244 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:06.244 { 00:13:06.244 "name": "foobar", 00:13:06.244 "method": "nvmf_delete_target", 00:13:06.244 "req_id": 1 00:13:06.244 } 00:13:06.244 Got JSON-RPC error response 00:13:06.244 response: 00:13:06.244 { 00:13:06.244 "code": -32602, 00:13:06.244 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:06.244 }' 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:06.244 { 00:13:06.244 "name": "foobar", 00:13:06.244 "method": "nvmf_delete_target", 00:13:06.244 "req_id": 1 00:13:06.244 } 00:13:06.244 Got JSON-RPC error response 00:13:06.244 response: 00:13:06.244 { 00:13:06.244 "code": -32602, 00:13:06.244 "message": "The specified target doesn't exist, cannot delete it." 
00:13:06.244 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.244 rmmod nvme_tcp 00:13:06.244 rmmod nvme_fabrics 00:13:06.244 rmmod nvme_keyring 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1296119 ']' 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1296119 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1296119 ']' 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1296119 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1296119 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1296119' 00:13:06.244 killing process with pid 1296119 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1296119 00:13:06.244 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1296119 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.503 00:38:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.041 00:38:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.041 00:13:09.041 real 0m12.162s 00:13:09.041 user 0m20.102s 00:13:09.041 sys 0m5.275s 00:13:09.041 00:38:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.041 00:38:20 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.041 ************************************ 00:13:09.041 END TEST nvmf_invalid 00:13:09.041 ************************************ 00:13:09.041 00:38:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:09.041 00:38:20 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:09.041 00:38:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:09.041 00:38:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.041 00:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:09.041 ************************************ 00:13:09.041 START TEST nvmf_abort 00:13:09.041 ************************************ 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:09.041 * Looking for test storage... 00:13:09.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.041 00:38:20 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.041 00:38:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.317 
00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:14.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:14.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:14.317 Found net devices under 0000:86:00.0: cvl_0_0 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:14.317 Found net devices under 0000:86:00.1: cvl_0_1 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.317 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.576 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.576 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.576 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:14.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:13:14.576 00:13:14.576 --- 10.0.0.2 ping statistics --- 00:13:14.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.576 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:13:14.576 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:13:14.576 00:13:14.577 --- 10.0.0.1 ping statistics --- 00:13:14.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.577 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1300388 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1300388 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1300388 ']' 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.577 00:38:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.577 [2024-07-13 00:38:26.035506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:13:14.577 [2024-07-13 00:38:26.035550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.577 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.577 [2024-07-13 00:38:26.104905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.836 [2024-07-13 00:38:26.145600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.836 [2024-07-13 00:38:26.145639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.836 [2024-07-13 00:38:26.145646] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.836 [2024-07-13 00:38:26.145652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.836 [2024-07-13 00:38:26.145657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.836 [2024-07-13 00:38:26.145767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.836 [2024-07-13 00:38:26.145891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.836 [2024-07-13 00:38:26.145892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 [2024-07-13 00:38:26.271381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 Malloc0 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 Delay0 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
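A condensed sketch of the target-side bring-up that abort.sh drives through the rpc_cmd calls traced above and below; rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the flag comments are a gloss on this run, not tool documentation:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256                  # TCP transport for the target
rpc.py bdev_malloc_create 64 4096 -b Malloc0                           # 64 MiB ram bdev, 4 KiB blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                        # ~1 s artificial latency so aborts can catch I/O in flight
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0    # allow-any-host subsystem, serial SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0         # expose Delay0 as namespace 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420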
00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 [2024-07-13 00:38:26.345325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.836 00:38:26 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:14.836 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.094 [2024-07-13 00:38:26.503387] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:17.696 Initializing NVMe Controllers 00:13:17.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:17.696 controller IO queue size 128 less than required 00:13:17.696 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:17.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:17.696 Initialization complete. Launching workers. 
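A rough gloss of the abort summary printed next (a reading of this run's counters, not tool documentation): with Delay0 adding about a second of latency, nearly every queued read is still in flight when an abort is issued for it, so the 'failed' I/O count and the 'success' abort count should land close together, and here they do (41054 aborted reads against 41058 successful aborts, with 61 aborts completing unsuccessfully and 62 that could not be submitted).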
00:13:17.696 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 41054 00:13:17.696 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41119, failed to submit 62 00:13:17.696 success 41058, unsuccess 61, failed 0 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:17.696 rmmod nvme_tcp 00:13:17.696 rmmod nvme_fabrics 00:13:17.696 rmmod nvme_keyring 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1300388 ']' 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1300388 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1300388 ']' 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1300388 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1300388 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1300388' 00:13:17.696 killing process with pid 1300388 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1300388 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1300388 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.696 00:38:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.604 00:38:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:19.604 00:13:19.604 real 0m10.931s 00:13:19.604 user 0m11.662s 00:13:19.604 sys 0m5.291s 00:13:19.604 00:38:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.604 00:38:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:19.604 ************************************ 00:13:19.604 END TEST nvmf_abort 00:13:19.604 ************************************ 00:13:19.604 00:38:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:19.604 00:38:31 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:19.604 00:38:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.604 00:38:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.604 00:38:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.604 ************************************ 00:13:19.604 START TEST nvmf_ns_hotplug_stress 00:13:19.604 ************************************ 00:13:19.604 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:19.864 * Looking for test storage... 00:13:19.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.864 00:38:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.864 00:38:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.864 00:38:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.434 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:26.435 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:26.435 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.435 00:38:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:26.435 Found net devices under 0000:86:00.0: cvl_0_0 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:26.435 Found net devices under 0000:86:00.1: cvl_0_1 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.435 00:38:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:13:26.435 00:13:26.435 --- 10.0.0.2 ping statistics --- 00:13:26.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.435 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:13:26.435 00:13:26.435 --- 10.0.0.1 ping statistics --- 00:13:26.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.435 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.435 00:38:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1304279 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1304279 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1304279 ']' 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.435 [2024-07-13 00:38:37.075477] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:13:26.435 [2024-07-13 00:38:37.075527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.435 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.435 [2024-07-13 00:38:37.151635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:26.435 [2024-07-13 00:38:37.192567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.435 [2024-07-13 00:38:37.192610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.435 [2024-07-13 00:38:37.192617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.435 [2024-07-13 00:38:37.192623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.435 [2024-07-13 00:38:37.192629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.435 [2024-07-13 00:38:37.192756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.435 [2024-07-13 00:38:37.192864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.435 [2024-07-13 00:38:37.192866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.435 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:26.436 00:38:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:26.694 [2024-07-13 00:38:38.080611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.694 00:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.952 00:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.952 [2024-07-13 00:38:38.441903] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.952 00:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:27.211 00:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:27.470 Malloc0 00:13:27.470 00:38:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:27.728 Delay0 00:13:27.728 00:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.728 00:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:28.027 NULL1 00:13:28.027 00:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:28.286 00:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:28.286 00:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1304769 00:13:28.286 00:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:28.286 00:38:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.286 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.222 Read completed with error (sct=0, sc=11) 00:13:29.222 00:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.222 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.481 00:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:29.481 00:38:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:29.740 true 00:13:29.740 00:38:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:29.740 00:38:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.677 00:38:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.677 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:30.677 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:30.935 true 00:13:30.935 
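The repeating kill -0 / remove_ns / add_ns / bdev_null_resize pattern through this stretch is the main loop of ns_hotplug_stress.sh. Reconstructed as a sketch from the trace (names taken from the trace; exact script structure simplified):

# PERF_PID (1304769 here) is the 30-second spdk_nvme_perf job launched at
# ns_hotplug_stress.sh@40; kill -0 delivers no signal, it only tests that
# the process is still alive, so the loop runs for the length of the perf job.
null_size=1000
while kill -0 "$PERF_PID"; do
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1 under load
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it straight back
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                       # bump NULL1's size; prints true on success
done

The suppressed 'Read completed with error (sct=0, sc=11)' bursts are in-flight reads failing while namespace 1 is detached, which is the race this stress test exists to exercise.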
00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:30.935 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.194 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.194 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:31.194 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:31.451 true 00:13:31.451 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:31.451 00:38:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.826 00:38:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:32.826 00:38:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:32.826 00:38:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:33.084 true 00:13:33.084 00:38:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:33.084 00:38:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:34.021 00:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.021 00:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:34.021 00:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:34.280 true 00:13:34.280 00:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:34.280 00:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.280 00:38:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.538 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:34.538 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:34.797 true 00:13:34.797 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:34.797 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.056 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.056 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:35.056 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:35.315 true 00:13:35.315 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:35.315 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.574 00:38:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.574 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:35.574 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:35.832 true 00:13:35.832 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:35.832 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.091 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.091 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:36.091 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:36.350 true 00:13:36.350 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769 00:13:36.350 00:38:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.608 00:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.867 00:38:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:13:36.867 00:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:13:36.867 true
00:13:36.867 00:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:36.867 00:38:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:38.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:38.246 00:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:38.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:38.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:38.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:38.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:38.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:38.246 00:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:13:38.246 00:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:13:38.503 true
00:13:38.503 00:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:38.503 00:38:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:39.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:39.441 00:38:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:39.441 00:38:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:13:39.441 00:38:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:13:39.796 true
00:13:39.796 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:39.796 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:39.796 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:40.054 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:13:40.054 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:13:40.054 true
00:13:40.312 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:40.312 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:40.312 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:40.571 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:13:40.571 00:38:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:13:40.829 true
00:13:40.829 00:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:40.829 00:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:40.829 00:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:41.088 00:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:13:41.088 00:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:13:41.346 true
00:13:41.346 00:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:41.346 00:38:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:42.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.539 00:38:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:42.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:42.539 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:13:42.539 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:13:42.798 true
00:13:42.798 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:42.798 00:38:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:43.732 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:43.733 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:13:43.733 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:13:43.991 true
00:13:43.991 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:43.991 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:44.250 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:44.508 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:13:44.508 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:13:44.508 true
00:13:44.508 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:44.508 00:38:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:45.885 00:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:45.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:45.885 00:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:13:45.885 00:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:13:46.144 true
00:13:46.144 00:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:46.144 00:38:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:47.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:47.081 00:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:47.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:47.081 00:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:13:47.081 00:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:13:47.339 true
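The cycle traced above is the hotplug-stress loop itself: line @44 of target/ns_hotplug_stress.sh checks that the I/O generator (PID 1304769 in this run) is still alive, @45/@46 hot-remove and re-add namespace 1, and @49/@50 grow the NULL1 bdev by one block per pass. A minimal sketch of that loop, reconstructed only from the rpc.py calls visible in this log (the actual script may differ; perf_pid and the starting null_size are stand-ins, since neither is shown in this excerpt):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                                                         # starting value not shown in this excerpt
    while kill -0 "$perf_pid"; do                                          # @44: stop once the I/O generator exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1 under load
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the Delay0 bdev as namespace 1
        null_size=$((null_size + 1))                                       # @49: next size to grow to
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # @50: resize NULL1 while reads are in flight
    done

The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the I/O workload observing those removals: read as decimal, sct=0/sc=11 is NVMe generic status 0x0b, Invalid Namespace or Format, which is what a read racing the hot-remove would be completed with (an inference from the status code, not something this log states).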
00:13:47.339 00:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:47.339 00:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:47.598 00:38:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:47.598 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:13:47.598 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:13:47.856 true
00:13:47.856 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:47.856 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:48.119 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:48.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.119 [2024-07-13 00:38:59.649789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.119 [2024-07-13 00:38:59.649871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
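The burst beginning here is the target-side counterpart of those read failures: ctrlr_bdev.c:309 (nvmf_bdev_ctrlr_read_cmd) rejects each read because the transfer it requests, NLB 1 * block size 512 = 512 bytes, is larger than the 1-byte SGL the command carries, so the request is failed up front rather than submitted to the bdev layer. The burst floods the console; one quick way to gauge its size from a saved copy of this log (console.log is a placeholder file name):

    # count occurrences rather than lines, since several records can share one console line here
    grep -o 'Read NLB 1 \* block size 512 > SGL length 1' console.log | wc -l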
00:13:48.119 [... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1; the same record repeats several hundred more times, timestamped 2024-07-13 00:38:59.649922 through 00:38:59.671896 (elapsed 00:13:48.119 to 00:13:48.124); the repeats are elided here ...]
00:13:48.124 [2024-07-13 00:38:59.671933] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.671981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.672971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 
[2024-07-13 00:38:59.673069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:48.124 [2024-07-13 00:38:59.673834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.673979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.674020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.674058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.124 [2024-07-13 00:38:59.674097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.125 [2024-07-13 00:38:59.674135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.125 [2024-07-13 00:38:59.674166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.125 [2024-07-13 
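The flood above is a single failure repeated: each read command arrives with NLB 1 against a 512-byte block, so it needs 512 bytes of payload, but the SGL supplied with the request describes only 1 byte, and 512 > 1 fails the length check in nvmf_bdev_ctrlr_read_cmd before the read ever reaches the backing bdev. The completion status in the suppressed message, sct=0, sc=15, is the NVMe generic status Data SGL Length Invalid (0x0f), consistent with that rejection. Below is a minimal standalone sketch of the check; it mirrors only the condition named in the error text, is not the SPDK source itself, and hard-codes the values (NLB 1, block size 512, SGL length 1) taken from the log:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* Values taken from the error line in the log above. */
        uint64_t nlb = 1;          /* Number of Logical Blocks in the read command */
        uint32_t block_size = 512; /* namespace block size */
        uint32_t sgl_length = 1;   /* payload length described by the request SGL */

        /* The condition named in the message: the read must fit in the SGL. */
        if (nlb * block_size > sgl_length) {
            printf("Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                   nlb, block_size, sgl_length);
            /* The target then completes the request with sct=0, sc=15
             * (generic / Data SGL Length Invalid), matching the suppressed
             * "Read completed with error (sct=0, sc=15)" line above. */
            return 1;
        }
        return 0;
    }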
[... the ctrlr_bdev.c:309 read-length error continues at the same rate, timestamps 00:38:59.674 through 00:38:59.682; duplicate log lines omitted ...]
00:13:48.404 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:13:48.404 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
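Interleaved with the error flood, the harness itself keeps advancing: ns_hotplug_stress.sh@49 sets null_size=1022 and ns_hotplug_stress.sh@50 applies it over JSON-RPC with bdev_null_resize, resizing the null bdev NULL1 that backs the exported namespace while the reads above are still in flight. As the script name suggests, these resize events racing against live I/O are presumably the very condition the stress test provokes, so the repeated read rejections read as the exercised failure path rather than a harness fault.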
[... the ctrlr_bdev.c:309 read-length error flood continues, timestamps 00:38:59.682 through 00:38:59.693; duplicate log lines omitted ...]
00:13:48.406 [2024-07-13 00:38:59.693100] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.693501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.694317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.694370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.694414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.694460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.694505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.406 [2024-07-13 00:38:59.694550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.694955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 
[2024-07-13 00:38:59.695004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.695990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.696966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697321] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.407 [2024-07-13 00:38:59.697640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.697683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.697730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.697771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.697815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.697860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.697909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.697960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 
[2024-07-13 00:38:59.698468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.698969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.699901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.700721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.700770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.700816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.700871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.700914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.700968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701476] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.408 [2024-07-13 00:38:59.701843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.701887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.701930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.701976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 
[2024-07-13 00:38:59.702597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.702968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.703978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.704971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705397] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.705972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 
[2024-07-13 00:38:59.706467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.409 [2024-07-13 00:38:59.706587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.706983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.707958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708868] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.708968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 [2024-07-13 00:38:59.709909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.410 
[2024-07-13 00:38:59.709951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.410 [... the identical ctrlr_bdev.c:309 *ERROR* line repeats continuously, with only the timestamp advancing (00:38:59.709994 onward); duplicates elided ...]
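What this flood means: nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309) rejects each read because the requested transfer, NLB 1 * block size 512 = 512 bytes, exceeds the 1-byte SGL the host supplied for the data, and the command completes with sct=0, sc=15 (decimal), i.e. NVMe's generic-status "Data SGL Length Invalid" (0x0f), which is what the suppressed completion message below reports. A minimal self-contained sketch of that validation, with simplified hypothetical stand-in types rather than SPDK's real request structures:

/*
 * Minimal sketch of the length check behind the flood above. The struct
 * and its field names are hypothetical stand-ins, not SPDK's real types;
 * only the comparison (NLB * block size vs. SGL length) and the resulting
 * status values are taken from the log.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                0x00 /* generic command status type */
#define SC_DATA_SGL_LENGTH_INVALID 0x0f /* 15 decimal, as in "sct=0, sc=15" */

struct read_req {            /* hypothetical stand-in for the nvmf request */
	uint64_t nlb;        /* number of logical blocks to read */
	uint32_t block_size; /* bytes per logical block */
	uint32_t sgl_length; /* bytes the host's SGL can actually hold */
};

/* Fails the read (returning 0) when it would overrun the host's SGL. */
static int validate_read_len(const struct read_req *req, int *sct, int *sc)
{
	if (req->nlb * req->block_size > req->sgl_length) {
		fprintf(stderr,
			"*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			req->nlb, req->block_size, req->sgl_length);
		*sct = SCT_GENERIC;
		*sc = SC_DATA_SGL_LENGTH_INVALID;
		return 0;
	}
	return 1;
}

int main(void)
{
	/* The exact case in the log: NLB 1, block size 512, SGL length 1. */
	struct read_req req = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
	int sct, sc;

	if (!validate_read_len(&req, &sct, &sc)) {
		printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
	}
	return 0;
}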
00:13:48.414 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:48.414 [... identical ctrlr_bdev.c:309 *ERROR* lines continue through 00:38:59.737021; duplicates elided ...]
00:13:48.416 [2024-07-13 00:38:59.737050] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.416 [2024-07-13 00:38:59.737556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.737603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.737649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.737692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 
[2024-07-13 00:38:59.738914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.738967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.739958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.740996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741033] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.741953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.742006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.742053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.742549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.742601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.417 [2024-07-13 00:38:59.742645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.742689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.742737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.742780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 
[2024-07-13 00:38:59.742823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.742856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.742902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.742938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.742979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.743989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.744964] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.745981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 
[2024-07-13 00:38:59.746319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.746974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.418 [2024-07-13 00:38:59.747311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.747973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.748016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.748057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.748095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.748900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.748953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.748994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749219] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.749989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 
[2024-07-13 00:38:59.750307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.750975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.751954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.752979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.753033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.419 [2024-07-13 00:38:59.753077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.753973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.754011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.754047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 [2024-07-13 00:38:59.754086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.420 
[2024-07-13 00:38:59.754133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line from ctrlr_bdev.c:309 repeated verbatim several hundred times (timestamps 00:38:59.754163 through 00:38:59.775804); repetitions elided ...]
00:13:48.424 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical *ERROR* line repeated verbatim (timestamps 00:38:59.775855 through 00:38:59.781174); repetitions elided ...]
00:13:48.425 [2024-07-13 00:38:59.781211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.781974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782437] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.782995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.783493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.783543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.783586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.783630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.783681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.425 [2024-07-13 00:38:59.783727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.783772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.783822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.783867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.783907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.783955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 
[2024-07-13 00:38:59.784081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.784961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.785999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786371] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.786966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 
[2024-07-13 00:38:59.787409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.426 [2024-07-13 00:38:59.787902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.787951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.788988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.789778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.789827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.789859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.789897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.789938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.789975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790362] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.790992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 
[2024-07-13 00:38:59.791547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.791960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.792982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.427 [2024-07-13 00:38:59.793939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.793988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794374] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.794986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 
[2024-07-13 00:38:59.795601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.795965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.796997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.428 [2024-07-13 00:38:59.797951] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.428 [2024-07-13 00:38:59.798001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c:309 repeated verbatim several hundred times by the nvmf unit tests, wall clock 2024-07-13 00:38:59.798-00:38:59.825, log clock 00:13:48.428-00:13:48.434; duplicates omitted ...]
00:13:48.434 [2024-07-13 00:38:59.825475] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.825838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:48.434 [2024-07-13 00:38:59.826621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.826986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827270] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.827987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 
[2024-07-13 00:38:59.828397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.828964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.434 [2024-07-13 00:38:59.829336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.829998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.830966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831090] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.831968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 
[2024-07-13 00:38:59.832247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.832998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.833985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.435 [2024-07-13 00:38:59.834767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.834811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.834855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.834908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.834957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.834998] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.835979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 
[2024-07-13 00:38:59.836201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.836996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.837975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838545] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.838984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.436 [2024-07-13 00:38:59.839606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.839649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 
[2024-07-13 00:38:59.840492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.840974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.841971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.437 [2024-07-13 00:38:59.842683] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.437 [2024-07-13 00:38:59.842728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* record repeats several hundred times; SPDK timestamps run from 2024-07-13 00:38:59.842775 through 00:38:59.869760 and Jenkins timestamps from 00:13:48.437 through 00:13:48.442, with a single interleaved "true" at 00:13:48.439 ...]
00:13:48.442 [2024-07-13 00:38:59.869801] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.869843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.869885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.869923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.869962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.442 [2024-07-13 00:38:59.870590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 
[2024-07-13 00:38:59.870853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.870984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.871030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.871827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.871876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.871919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.871966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.872960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873717] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.873966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 [2024-07-13 00:38:59.874987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.443 
00:13:48.444 [2024-07-13 00:38:59.875030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.444 [... same *ERROR* line repeated (00:38:59.875075 through 00:38:59.875405); duplicates trimmed ...]
00:13:48.444 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:48.444 [... same *ERROR* line repeated (00:38:59.875885 through 00:38:59.876015); duplicates trimmed ...]
00:13:48.444 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:48.444 [... same *ERROR* line repeated (00:38:59.876055 through 00:38:59.876394); duplicates trimmed ...]
00:13:48.444 00:38:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:48.444 [2024-07-13 00:38:59.876435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.444 [... same *ERROR* line repeated (00:38:59.876477 through 00:38:59.891788); duplicate log lines trimmed ...] 00:13:48.447
[2024-07-13 00:38:59.891835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.891873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.891911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.891953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.891998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.892971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.447 [2024-07-13 00:38:59.893471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893956] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.893997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.894031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.894077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.894121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.894168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.894212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.894266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 
[2024-07-13 00:38:59.895921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.895967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.896979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.897996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898214] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.448 [2024-07-13 00:38:59.898960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 
[2024-07-13 00:38:59.899405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.899982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.900641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.901987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902370] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.902988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 
[2024-07-13 00:38:59.903475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.903971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.904007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.904046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.904087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.904132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.449 [2024-07-13 00:38:59.904170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.904986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.905956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906201] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.906980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 
[2024-07-13 00:38:59.907269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.907984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.450 [2024-07-13 00:38:59.908543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.454 Message suppressed 999 times: [2024-07-13 00:38:59.925688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.454 Read completed with error (sct=0, sc=15) 00:13:48.456 [2024-07-13 00:38:59.935651] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.935692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.935738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.935786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.935834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.935879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.935927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.935975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 
[2024-07-13 00:38:59.936862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.936999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.937777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.938989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939827] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.456 [2024-07-13 00:38:59.939918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.939964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.940962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 
[2024-07-13 00:38:59.941005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.941964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.942972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943386] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.457 [2024-07-13 00:38:59.943619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.943987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.944035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.944078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.944121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.944157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.944199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.944250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.944290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 
[2024-07-13 00:38:59.945289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.731 [2024-07-13 00:38:59.945907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.945948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.946988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947447] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.947969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.948624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 
[2024-07-13 00:38:59.949212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.949985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.950984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.951030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.732 [2024-07-13 00:38:59.951079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951447] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.951847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 [2024-07-13 00:38:59.952589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.733 
[2024-07-13 00:38:59.952631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* lines from 00:38:59.952676 through 00:38:59.975470 collapsed ...]
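For context on the repeated errors above and below: every one of them is the same validation failure, in which a read command's transfer size (NLB * block size) exceeds the buffer described by its SGL. A minimal sketch of that check, assuming only what the error message itself states; read_cmd_check_sgl and its parameters are simplified stand-ins for illustration, not SPDK's actual code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified form of the length check that emits the
 * "Read NLB ... * block size ... > SGL length ..." error in this log:
 * a read is rejected when the payload it would transfer (NLB * block
 * size) overruns the buffer described by the request's SGL. */
static int
read_cmd_check_sgl(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
		        " > SGL length %" PRIu32 "\n",
		        num_blocks, block_size, sgl_length);
		return -1; /* the command completes with an error status */
	}
	return 0;
}

int
main(void)
{
	/* The case repeated throughout this section: 1 block of 512 bytes
	 * against a 1-byte SGL. */
	return read_cmd_check_sgl(1, 512, 1) ? 1 : 0;
}

Run as-is, this sketch reproduces the exact message text that floods this section of the log.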
00:13:48.737 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:48.737 [2024-07-13 00:38:59.976279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* lines from 00:38:59.976333 through 00:38:59.978256 collapsed ...]
00:13:48.738 [2024-07-13 00:38:59.978298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL
length 1 00:13:48.738 [2024-07-13 00:38:59.978336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.978913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.979801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.980958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981003] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.738 [2024-07-13 00:38:59.981366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.981976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 
[2024-07-13 00:38:59.982206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.982950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.983970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.984958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985004] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.985974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 
[2024-07-13 00:38:59.986145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.739 [2024-07-13 00:38:59.986771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.986809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.986849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.986889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.986931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.986968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.987960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988571] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.988960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.989297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 
[2024-07-13 00:38:59.990446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.990989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.740 [2024-07-13 00:38:59.991511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.991966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992737] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.992877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.993945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 
[2024-07-13 00:38:59.993983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.994988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.741 [2024-07-13 00:38:59.995607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.995654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.995703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.995745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.995789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.995833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.996990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.742 [2024-07-13 00:38:59.997021] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.742 [2024-07-13 00:38:59.997060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.748 [... the identical *ERROR* line above repeated several hundred times between 2024-07-13 00:38:59.997 and 00:39:00.024; duplicate log lines elided ...]
size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.022951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.022991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.023876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.023934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.023980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024912] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.024993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.748 [2024-07-13 00:39:00.025495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.025955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 
[2024-07-13 00:39:00.026000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.026970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.027989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028451] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.028953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.749 [2024-07-13 00:39:00.029472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.750 [2024-07-13 00:39:00.029522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.750 [2024-07-13 00:39:00.029572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.750 
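Aside for anyone triaging this output: the flood above is the NVMe-oF target's read-path bounds check rejecting reads whose requested transfer (NLB times block size) exceeds the SGL buffer the host supplied; the log attributes it to nvmf_bdev_ctrlr_read_cmd in SPDK's ctrlr_bdev.c. A minimal Python model of that comparison, with hypothetical names, is:

def read_fits_sgl(num_blocks: int, block_size: int, sgl_length: int) -> bool:
    # Model of the check behind "Read NLB 1 * block size 512 > SGL length 1":
    # the command is rejected when the data it asks for cannot fit in the
    # buffer described by its SGL. All names here are hypothetical.
    return num_blocks * block_size <= sgl_length

# The failing case from this log: one 512-byte block into a 1-byte SGL.
assert not read_fits_sgl(1, 512, 1)

Under a namespace-hotplug stress run this is expected noise rather than a test failure, which is presumably why the target also rate-limits it (the "Message suppressed 999 times" records below).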
00:13:48.750 [2024-07-13 00:39:00.029620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:48.750 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical *ERROR* lines repeated, 2024-07-13 00:39:00.030447 through 00:39:00.043995 ...]
00:13:48.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.752 00:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:48.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:48.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
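The rpc.py call above is the stress driver hot-adding the Delay0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1 while reads are still in flight. A sketch of one such add/remove cycle driven from Python, assuming the same rpc.py path, an nvmf_subsystem_remove_ns call, and a namespace ID of 1 (the nsid is an assumption, not shown in this log):

import subprocess
import time

# Path and NQN copied from the log line above; nsid 1 is an assumption.
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def rpc(*args: str) -> None:
    # Shell out to SPDK's rpc.py just as the test script does.
    subprocess.run([RPC, *args], check=True)

# One hotplug cycle: expose the Delay0 bdev as a namespace, let I/O race
# against it briefly, then detach it again.
rpc("nvmf_subsystem_add_ns", NQN, "Delay0")
time.sleep(0.1)
rpc("nvmf_subsystem_remove_ns", NQN, "1")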
[... same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd read error ("Read NLB 1 * block size 512 > SGL length 1") repeated at 00:39:00.249062-00:39:00.260012; duplicate log lines elided ...]
[... same read error repeated at 00:39:00.260051-00:39:00.260674; duplicate log lines elided ...]
00:13:48.755 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:48.755 [2024-07-13 00:39:00.261495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same read error repeated at 00:39:00.261536-00:39:00.273144; duplicate log lines elided ...]
00:13:48.757 [2024-07-13 00:39:00.273186] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.273884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.274106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.757 [2024-07-13 00:39:00.274148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 
[2024-07-13 00:39:00.274525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.274978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.275994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.276032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.276077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.276118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.276162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:48.758 [2024-07-13 00:39:00.276205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276630] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.276719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.277997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 
[2024-07-13 00:39:00.278558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.278992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.279978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 [2024-07-13 00:39:00.280581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.037 00:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:49.038 [2024-07-13 00:39:00.280628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.280675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.280721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.280767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.280817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.280861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.280904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 00:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:49.038 [2024-07-13 00:39:00.280951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.280997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 
00:39:00.281854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.281972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.282910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:49.038 [2024-07-13 00:39:00.282953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.283005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.283049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.283884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.283937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.283983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.284974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.038 [2024-07-13 00:39:00.285678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285957] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.285995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.286960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 
[2024-07-13 00:39:00.287241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.287979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.288992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289395] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 [2024-07-13 00:39:00.289434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.039 
[... the same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeated several hundred times, timestamps 00:39:00.289434 through 00:39:00.310535 ...] 00:13:49.043 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:49.043 
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated further, timestamps 00:39:00.310585 through 00:39:00.317286 ...] 00:13:49.045 
[2024-07-13 00:39:00.317325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.317972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.318993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319486] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.319979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.045 [2024-07-13 00:39:00.320353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 
[2024-07-13 00:39:00.320671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.320967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.321986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.322973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323620] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.323963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 
[2024-07-13 00:39:00.324783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.324993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.325043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.325087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.325128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.325176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.046 [2024-07-13 00:39:00.325221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.325935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.326963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327550] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.327979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 
[2024-07-13 00:39:00.328660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.328982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.329982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.330024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.330061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.047 [2024-07-13 00:39:00.330098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330932] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.330971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.331973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.332785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 
[2024-07-13 00:39:00.332831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.332877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.332913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.332957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.048 [2024-07-13 00:39:00.333924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:49.054 [2024-07-13 00:39:00.361113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13
00:39:00.361190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.361971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:49.054 [2024-07-13 00:39:00.362271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.362982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.054 [2024-07-13 00:39:00.363611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.363650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.363699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.364986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365027] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.365994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 
[2024-07-13 00:39:00.366223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.366994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.367961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368549] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.055 [2024-07-13 00:39:00.368817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.368859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.368905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.368945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.368987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 
[2024-07-13 00:39:00.369732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.369893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.370965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.371973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372748] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.372980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.373977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 
[2024-07-13 00:39:00.374018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.056 [2024-07-13 00:39:00.374325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.374988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.375995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376230] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.376349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.377955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 [2024-07-13 00:39:00.378001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057 
[2024-07-13 00:39:00.378047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.057
[... same "Read NLB 1 * block size 512 > SGL length 1" *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated several hundred times, timestamps 00:39:00.378047 through 00:39:00.405632 (Jenkins time 00:13:49.057-00:13:49.062) ...]
[2024-07-13 00:39:00.405681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.062 [2024-07-13 00:39:00.405730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.062 [2024-07-13 00:39:00.405780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.062 [2024-07-13 00:39:00.405827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.062 [2024-07-13 00:39:00.405869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.062 [2024-07-13 00:39:00.405917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.062 [2024-07-13 00:39:00.405960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.062 [2024-07-13 00:39:00.406011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.406974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.407970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408017] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.408977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 
[2024-07-13 00:39:00.409181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.063 [2024-07-13 00:39:00.409970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.410532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.411973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412248] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.412984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 
[2024-07-13 00:39:00.413359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.413973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 Message suppressed 999 times: [2024-07-13 00:39:00.414307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 Read completed with error (sct=0, sc=15) 00:13:49.064 [2024-07-13 00:39:00.414355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 
00:39:00.414554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.414961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.064 [2024-07-13 00:39:00.415985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:49.065 [2024-07-13 00:39:00.416149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.416957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.417985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418620] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.418992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 
[2024-07-13 00:39:00.419780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.419982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.065 [2024-07-13 00:39:00.420768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.420809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.420851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.420891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.420939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.420977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.421012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.421055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.421094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.421136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.421175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422737] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.422981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 
[2024-07-13 00:39:00.423814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.423992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.424982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.425958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426333] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.066 [2024-07-13 00:39:00.426377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.426962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 
[2024-07-13 00:39:00.427409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.427691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.428972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.429981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430410] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.430974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.067 [2024-07-13 00:39:00.431020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 
[2024-07-13 00:39:00.431729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.431971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.432984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.433968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434391] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.434962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 
[2024-07-13 00:39:00.435694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.435970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.068 [2024-07-13 00:39:00.436433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.436970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437926] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.437972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.438384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 
[2024-07-13 00:39:00.439868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.439994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.440963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.069 [2024-07-13 00:39:00.441756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.441800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.441841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.441888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442159] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.442810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 
[2024-07-13 00:39:00.443718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.443971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:49.070 [2024-07-13 00:39:00.444840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:13:49.070 [2024-07-13 00:39:00.444887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:49.070 [... the same nvmf_bdev_ctrlr_read_cmd error repeated for every queued read; duplicate lines trimmed ...]
00:13:49.071 true
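The repeated error above is the length check in nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309) firing for each queued read: the command asks for NLB 1 logical block of 512 bytes, but the SGL carried by the request describes only 1 byte of buffer, so 1 * 512 > 1 and the read is rejected. A minimal sketch of that arithmetic (shell names here are illustrative, not SPDK's actual C code):

    # Illustration only: the guard implied by "Read NLB 1 * block size 512 > SGL length 1".
    nlb=1          # NLB: number of logical blocks the read requests
    block_size=512 # bytes per logical block
    sgl_length=1   # bytes of buffer described by the request's SGL
    if (( nlb * block_size > sgl_length )); then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
    fi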
00:13:49.071 00:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:49.071 00:39:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:50.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:50.006 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:50.265 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:13:50.265 00:39:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:13:50.523 true
[... the same kill -0 / remove_ns / add_ns / bdev_null_resize round repeats for null_size=1025 through 1031, interleaved with further "Message suppressed 999 times" read errors; duplicate trace trimmed ...]
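Each round of the trace above follows the same shape: confirm the background I/O generator (PID 1304769) is still alive with kill -0, detach namespace 1, re-attach the Delay0 bdev, and grow the NULL1 bdev by one block. A hedged reconstruction of the loop at lines 44-50 of ns_hotplug_stress.sh, inferred from this trace alone (rpc_py and perf_pid are assumed names, not verified against the script):

    # Reconstructed from the trace; not a verbatim copy of ns_hotplug_stress.sh.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1024
    while kill -0 "$perf_pid" 2>/dev/null; do   # line 44: stop once the I/O generator exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        "$rpc_py" bdev_null_resize NULL1 "$null_size"   # line 50: resize to the new size
        ((null_size++))
    done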
00:13:59.049 Initializing NVMe Controllers
00:13:59.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:59.049 Controller IO queue size 128, less than required.
00:13:59.049 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:59.049 Controller IO queue size 128, less than required.
00:13:59.049 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:59.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:59.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:59.050 Initialization complete. Launching workers.
00:13:59.050 ========================================================
00:13:59.050 Latency(us)
00:13:59.050 Device Information : IOPS MiB/s Average min max
00:13:59.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2400.90 1.17 34146.82 2200.68 1012150.73
00:13:59.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15993.60 7.81 8002.97 2545.06 384872.24
00:13:59.050 ========================================================
00:13:59.050 Total : 18394.50 8.98 11415.34 2200.68 1012150.73
00:13:59.050
00:13:59.050 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:59.050 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:13:59.050 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:13:59.308 true
00:13:59.308 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1304769
00:13:59.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1304769) - No such process
00:13:59.308 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1304769
00:13:59.308 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:59.566 00:39:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:59.566 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:59.566 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:59.566 00:39:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:59.825 null0
[... bdev_null_create null1 through null7 repeat the same pattern; duplicate trace trimmed ...]
00:14:01.116 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:14:01.116 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:14:01.117 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:01.117 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
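add_remove, first traced here for worker 1 (nsid=1, bdev=null0), is the per-namespace hot-plug loop: attach the worker's null bdev as a namespace, then detach it, ten times over. An approximation pieced together from the @14-@18 trace lines (not the script verbatim; rpc_py as above):

    # Approximation of add_remove (ns_hotplug_stress.sh lines 14-18), from the trace.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }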
[... add_remove workers 2 through 7 are launched the same way (add_remove 2 null1 ... add_remove 7 null6), each issuing its first nvmf_subsystem_add_ns and recording its PID with pids+=($!); duplicate trace trimmed ...]
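The eight workers run as background jobs; each PID is pushed onto the pids array and the parent blocks on all of them (the wait on 1310869 through 1310883 just below). A sketch of that fan-out/fan-in, reconstructed from the @58-@66 trace lines under the same assumptions as above:

    # Reconstruction of the launcher around ns_hotplug_stress.sh lines 58-66.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &             # one hot-plug worker per namespace
        pids+=($!)
    done
    wait "${pids[@]}"                                  # block until every worker finishes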
00:14:01.117 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:14:01.117 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:14:01.117 00:39:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1310869 1310871 1310873 1310875 1310877 1310879 1310881 1310883
[... interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns rounds from the eight workers; duplicate trace trimmed ...]
00:14:03.704 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.704 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.704 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.704 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.704 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:03.704 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.963 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.223 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.482 
00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.482 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.741 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.999 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.999 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
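The churn in the trace above is ns_hotplug_stress.sh lines 16-18 at work: a counting loop that keeps re-adding namespaces 1-8 to nqn.2016-06.io.spdk:cnode1 and tearing them out again while the test runs. The shuffled completion order of the add/remove RPCs suggests several workers running in parallel; the bare (( ++i )) / (( i < 10 )) records that follow are those loops winding down. A minimal sketch of the pattern, reconstructed from the trace (the add_remove helper name and the parallel layout are assumptions; the RPC invocations and the ten-iteration bound are taken from the log):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {    # hypothetical helper: one worker per namespace ID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                # ns_hotplug_stress.sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"         # @18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # nsids 1..8 map onto null bdevs null0..null7
    done
    wait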
00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.000 rmmod nvme_tcp 00:14:05.000 rmmod nvme_fabrics 00:14:05.000 rmmod nvme_keyring 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1304279 ']' 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1304279 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1304279 ']' 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1304279 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1304279 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1304279' 00:14:05.000 killing process with pid 1304279 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1304279 00:14:05.000 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1304279 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.257 00:39:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.788 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.788 00:14:07.788 real 0m47.643s 00:14:07.788 user 3m13.020s 00:14:07.788 sys 0m15.224s 00:14:07.788 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.788 00:39:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.788 ************************************ 00:14:07.788 END TEST nvmf_ns_hotplug_stress 00:14:07.788 ************************************ 00:14:07.788 00:39:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.788 00:39:18 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:07.788 00:39:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.788 00:39:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.788 00:39:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.788 ************************************ 00:14:07.788 START TEST nvmf_connect_stress 00:14:07.788 ************************************ 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:07.788 * Looking for test storage... 
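The hot-plug run above ends with the harness tearing itself down: ns_hotplug_stress.sh@68 drops the signal trap and @70 calls nvmftestfini, which unloads the kernel initiator modules (the rmmod lines) and then kills target pid 1304279 before the END TEST banner and timing summary are printed. A condensed sketch of that teardown, pieced together from the xtrace (the '&& break' retry exit is an assumption; the real helpers in nvmf/common.sh and autotest_common.sh also handle iso mode and namespace cleanup):

    nvmftestfini() {
        sync                                   # nvmf/common.sh@117
        set +e                                 # module unload may legitimately fail
        for i in {1..20}; do                   # @121
            modprobe -v -r nvme-tcp && break   # @122: prints the rmmod nvme_* lines
        done
        modprobe -v -r nvme-fabrics            # @123
        set -e
        killprocess "$nvmfpid"                 # @490: pid 1304279 in this run
    }

    killprocess() {                            # autotest_common.sh@948-@972, simplified
        local pid=$1
        kill -0 "$pid"                         # fail fast if the target already died
        echo "killing process with pid $pid"   # the message captured in the log
        kill "$pid"
        wait "$pid"                            # reap it so ports and hugepages free up
    }

The real killprocess first checks the process name (ps --no-headers -o comm=) and only escalates when the target was started under sudo; here the name is reactor_1, so a plain kill suffices.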
00:14:07.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.788 00:39:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.065 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:13.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:13.066 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:13.066 Found net devices under 0000:86:00.0: cvl_0_0 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.066 00:39:24 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:13.066 Found net devices under 0000:86:00.1: cvl_0_1 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.066 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.325 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.325 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.325 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:14:13.326 00:14:13.326 --- 10.0.0.2 ping statistics --- 00:14:13.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.326 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:14:13.326 00:14:13.326 --- 10.0.0.1 ping statistics --- 00:14:13.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.326 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1315235 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1315235 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1315235 ']' 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.326 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.326 [2024-07-13 00:39:24.787190] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
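The nvmf_tgt now starting is launched inside a network namespace (nvmf/common.sh@480: ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), and the two clean pings a few records earlier are the gate for that: nvmf_tcp_init moved one port of the dual-port E810 into a private namespace, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over real hardware. Collected from the trace above, the plumbing is:

    ip -4 addr flush cvl_0_0                             # nvmf/common.sh@244-@245
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                         # @248
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # @251: target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # @254: initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255
    ip link set cvl_0_1 up                               # @258
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @260
    ip netns exec cvl_0_0_ns_spdk ip link set lo up                     # @261
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # @264: admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # @267: root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # @268: target ns -> initiator

With only two usable ports on one host, this namespace split is what lets a single machine act as both TCP initiator and target over the physical NIC.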
00:14:13.326 [2024-07-13 00:39:24.787236] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.326 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.326 [2024-07-13 00:39:24.855397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:13.585 [2024-07-13 00:39:24.895946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.585 [2024-07-13 00:39:24.895983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.585 [2024-07-13 00:39:24.895990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.585 [2024-07-13 00:39:24.895996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.585 [2024-07-13 00:39:24.896000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.585 [2024-07-13 00:39:24.896112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.585 [2024-07-13 00:39:24.896253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.585 [2024-07-13 00:39:24.896254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.585 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.585 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:13.585 00:39:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.585 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.585 00:39:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.585 [2024-07-13 00:39:25.020827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.585 [2024-07-13 00:39:25.051365] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.585 NULL1 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1315263 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:13.585 00:39:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
[... the connect_stress.sh@27/@28 for-i/cat pair that builds rpc.txt is traced twenty times, 00:14:13.585-00:14:13.844; the identical repetitions are elided ...]
EAL: No free 2048 kB hugepages reported on node 1 [target notice interleaved mid-loop]
[... connect_stress.sh@34/@35 then polls the stress binary, kill -0 1315263 followed by rpc_cmd, roughly every 250-550 ms from 00:14:13.844 through 00:14:23.475 (wall clock 00:39:25-00:39:34); the identical iterations are elided ...]
00:14:23.733 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.733 00:39:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1315263 00:14:23.733 00:39:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.733 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559
-- # xtrace_disable 00:14:23.733 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.733 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1315263 00:14:23.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1315263) - No such process 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1315263 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.992 rmmod nvme_tcp 00:14:23.992 rmmod nvme_fabrics 00:14:23.992 rmmod nvme_keyring 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1315235 ']' 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1315235 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1315235 ']' 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1315235 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:23.992 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1315235 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1315235' 00:14:24.250 killing process with pid 1315235 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1315235 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1315235 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.250 00:39:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.786 00:39:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.786 00:14:26.786 real 0m19.001s 00:14:26.786 user 0m40.174s 00:14:26.786 sys 0m8.063s 00:14:26.786 00:39:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.786 00:39:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.786 ************************************ 00:14:26.786 END TEST nvmf_connect_stress 00:14:26.786 ************************************ 00:14:26.786 00:39:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:26.786 00:39:37 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:26.786 00:39:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.786 00:39:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.786 00:39:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.786 ************************************ 00:14:26.786 START TEST nvmf_fused_ordering 00:14:26.786 ************************************ 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:26.786 * Looking for test storage... 
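The nvmf_connect_stress phase that just ended follows a simple pattern: start the stress binary in the background against the freshly created subsystem, then poll it with kill -0 while firing RPCs at the target, until the binary's -t 10 timeout expires and kill reports (1315263) - No such process. A minimal reconstruction from the xtrace above; the binary path, flags and the rpc_cmd helper are exactly as logged, while the RPC payload written by the @28 "cat" and the redirect into rpc_cmd are not visible in the trace and are assumptions:

    # Reconstructed shape of connect_stress.sh's main loop (a sketch, not the script itself).
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # assumption: each pass appends one RPC to the batch file (the trace
        # only shows "cat"); rpc_get_methods stands in for the real payload
        echo rpc_get_methods >> "$rpcs"
    done
    # Poll until the stress binary exits on its own; every pass also keeps
    # the target's RPC server busy while connections churn.
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"   # assumption: the queued batch is replayed on stdin
    done
    wait "$PERF_PID"
    rm -f "$rpcs"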
00:14:26.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.786 00:39:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=[the toolchain prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated six times ahead of the system PATH; full value elided] 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=[same prepends, rotated; elided] 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=[same prepends, rotated; elided] 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [the exported PATH; elided] 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.786 00:39:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:32.063 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:32.063 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:32.063 Found net devices under 0000:86:00.0: cvl_0_0 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.063 00:39:43 
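The device-discovery loop traced above is compact enough to restate; every construct below appears in the trace (nvmf/common.sh@382-@401), so this is the same logic with the timestamps stripped rather than new code:

    # Map each whitelisted PCI function to its kernel net device via sysfs;
    # the basename of each entry under .../net/ is the interface name.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")          # here: cvl_0_0 and cvl_0_1
    done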
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:32.063 Found net devices under 0000:86:00.1: cvl_0_1 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.063 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:32.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:14:32.322 00:14:32.322 --- 10.0.0.2 ping statistics --- 00:14:32.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.322 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:14:32.322 00:14:32.322 --- 10.0.0.1 ping statistics --- 00:14:32.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.322 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1320427 00:14:32.322 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1320427 00:14:32.323 00:39:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:32.323 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1320427 ']' 00:14:32.323 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.323 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.323 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.323 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.323 00:39:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.582 [2024-07-13 00:39:43.899215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
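nvmf_tcp_init above wires the two ports of the ice NIC into a point-to-point rig: the target port cvl_0_0 moves into a fresh network namespace (cvl_0_0_ns_spdk) with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the trace; every command below is logged above, only the comments are added:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator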
00:14:32.582 [2024-07-13 00:39:43.899272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.582 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.582 [2024-07-13 00:39:43.970170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.582 [2024-07-13 00:39:44.010014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.582 [2024-07-13 00:39:44.010051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.582 [2024-07-13 00:39:44.010058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.582 [2024-07-13 00:39:44.010064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.582 [2024-07-13 00:39:44.010069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.582 [2024-07-13 00:39:44.010102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.582 [2024-07-13 00:39:44.133830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.582 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 [2024-07-13 00:39:44.153982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.841 00:39:44 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.841 NULL1 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:32.841 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.842 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.842 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.842 00:39:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:32.842 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.842 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.842 00:39:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.842 00:39:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:32.842 [2024-07-13 00:39:44.206499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:32.842 [2024-07-13 00:39:44.206529] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320522 ] 00:14:32.842 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.101 Attached to nqn.2016-06.io.spdk:cnode1 00:14:33.101 Namespace ID: 1 size: 1GB 00:14:33.101
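Stripped of the xtrace noise, the entire target-side provisioning for this test is six RPCs, all visible above and issued through the rpc_cmd wrapper against the nvmf_tgt running inside cvl_0_0_ns_spdk:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # flags exactly as logged
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                     # allow any host, up to 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512-byte blocks
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The null backing bdev has no real data path behind it, which is enough for a test that only exercises command submission and completion ordering.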
fused_ordering(0) 00:14:33.101 fused_ordering(1) 00:14:33.101 [... fused_ordering(2) through fused_ordering(580) follow, one completion per line: through fused_ordering(204) at 00:14:33.101-00:14:33.102, fused_ordering(205)-fused_ordering(409) at 00:14:33.361-00:14:33.362, and fused_ordering(410) onward at 00:14:33.995; the uninterrupted run is elided here ...] fused_ordering(581) 00:14:33.995 fused_ordering(582) 00:14:33.995 
fused_ordering(583) 00:14:33.995 fused_ordering(584) 00:14:33.995 fused_ordering(585) 00:14:33.995 fused_ordering(586) 00:14:33.995 fused_ordering(587) 00:14:33.995 fused_ordering(588) 00:14:33.995 fused_ordering(589) 00:14:33.995 fused_ordering(590) 00:14:33.995 fused_ordering(591) 00:14:33.995 fused_ordering(592) 00:14:33.995 fused_ordering(593) 00:14:33.995 fused_ordering(594) 00:14:33.995 fused_ordering(595) 00:14:33.995 fused_ordering(596) 00:14:33.995 fused_ordering(597) 00:14:33.995 fused_ordering(598) 00:14:33.995 fused_ordering(599) 00:14:33.995 fused_ordering(600) 00:14:33.995 fused_ordering(601) 00:14:33.995 fused_ordering(602) 00:14:33.995 fused_ordering(603) 00:14:33.995 fused_ordering(604) 00:14:33.995 fused_ordering(605) 00:14:33.995 fused_ordering(606) 00:14:33.995 fused_ordering(607) 00:14:33.995 fused_ordering(608) 00:14:33.995 fused_ordering(609) 00:14:33.995 fused_ordering(610) 00:14:33.995 fused_ordering(611) 00:14:33.995 fused_ordering(612) 00:14:33.995 fused_ordering(613) 00:14:33.995 fused_ordering(614) 00:14:33.995 fused_ordering(615) 00:14:34.255 fused_ordering(616) 00:14:34.255 fused_ordering(617) 00:14:34.255 fused_ordering(618) 00:14:34.255 fused_ordering(619) 00:14:34.255 fused_ordering(620) 00:14:34.255 fused_ordering(621) 00:14:34.255 fused_ordering(622) 00:14:34.255 fused_ordering(623) 00:14:34.255 fused_ordering(624) 00:14:34.255 fused_ordering(625) 00:14:34.255 fused_ordering(626) 00:14:34.255 fused_ordering(627) 00:14:34.255 fused_ordering(628) 00:14:34.255 fused_ordering(629) 00:14:34.255 fused_ordering(630) 00:14:34.255 fused_ordering(631) 00:14:34.255 fused_ordering(632) 00:14:34.255 fused_ordering(633) 00:14:34.255 fused_ordering(634) 00:14:34.255 fused_ordering(635) 00:14:34.255 fused_ordering(636) 00:14:34.255 fused_ordering(637) 00:14:34.255 fused_ordering(638) 00:14:34.255 fused_ordering(639) 00:14:34.255 fused_ordering(640) 00:14:34.255 fused_ordering(641) 00:14:34.255 fused_ordering(642) 00:14:34.255 fused_ordering(643) 00:14:34.255 fused_ordering(644) 00:14:34.255 fused_ordering(645) 00:14:34.255 fused_ordering(646) 00:14:34.255 fused_ordering(647) 00:14:34.255 fused_ordering(648) 00:14:34.255 fused_ordering(649) 00:14:34.255 fused_ordering(650) 00:14:34.255 fused_ordering(651) 00:14:34.255 fused_ordering(652) 00:14:34.255 fused_ordering(653) 00:14:34.255 fused_ordering(654) 00:14:34.255 fused_ordering(655) 00:14:34.255 fused_ordering(656) 00:14:34.255 fused_ordering(657) 00:14:34.255 fused_ordering(658) 00:14:34.255 fused_ordering(659) 00:14:34.255 fused_ordering(660) 00:14:34.255 fused_ordering(661) 00:14:34.255 fused_ordering(662) 00:14:34.255 fused_ordering(663) 00:14:34.255 fused_ordering(664) 00:14:34.255 fused_ordering(665) 00:14:34.255 fused_ordering(666) 00:14:34.255 fused_ordering(667) 00:14:34.255 fused_ordering(668) 00:14:34.255 fused_ordering(669) 00:14:34.255 fused_ordering(670) 00:14:34.255 fused_ordering(671) 00:14:34.255 fused_ordering(672) 00:14:34.255 fused_ordering(673) 00:14:34.255 fused_ordering(674) 00:14:34.255 fused_ordering(675) 00:14:34.255 fused_ordering(676) 00:14:34.255 fused_ordering(677) 00:14:34.255 fused_ordering(678) 00:14:34.255 fused_ordering(679) 00:14:34.255 fused_ordering(680) 00:14:34.255 fused_ordering(681) 00:14:34.255 fused_ordering(682) 00:14:34.255 fused_ordering(683) 00:14:34.255 fused_ordering(684) 00:14:34.255 fused_ordering(685) 00:14:34.255 fused_ordering(686) 00:14:34.255 fused_ordering(687) 00:14:34.255 fused_ordering(688) 00:14:34.255 fused_ordering(689) 00:14:34.255 fused_ordering(690) 
00:14:34.255 fused_ordering(691) 00:14:34.255 fused_ordering(692) 00:14:34.255 fused_ordering(693) 00:14:34.255 fused_ordering(694) 00:14:34.255 fused_ordering(695) 00:14:34.255 fused_ordering(696) 00:14:34.255 fused_ordering(697) 00:14:34.255 fused_ordering(698) 00:14:34.255 fused_ordering(699) 00:14:34.255 fused_ordering(700) 00:14:34.255 fused_ordering(701) 00:14:34.255 fused_ordering(702) 00:14:34.255 fused_ordering(703) 00:14:34.255 fused_ordering(704) 00:14:34.255 fused_ordering(705) 00:14:34.255 fused_ordering(706) 00:14:34.255 fused_ordering(707) 00:14:34.255 fused_ordering(708) 00:14:34.255 fused_ordering(709) 00:14:34.255 fused_ordering(710) 00:14:34.255 fused_ordering(711) 00:14:34.255 fused_ordering(712) 00:14:34.255 fused_ordering(713) 00:14:34.255 fused_ordering(714) 00:14:34.255 fused_ordering(715) 00:14:34.255 fused_ordering(716) 00:14:34.255 fused_ordering(717) 00:14:34.255 fused_ordering(718) 00:14:34.255 fused_ordering(719) 00:14:34.255 fused_ordering(720) 00:14:34.255 fused_ordering(721) 00:14:34.255 fused_ordering(722) 00:14:34.255 fused_ordering(723) 00:14:34.255 fused_ordering(724) 00:14:34.255 fused_ordering(725) 00:14:34.255 fused_ordering(726) 00:14:34.255 fused_ordering(727) 00:14:34.255 fused_ordering(728) 00:14:34.255 fused_ordering(729) 00:14:34.255 fused_ordering(730) 00:14:34.255 fused_ordering(731) 00:14:34.255 fused_ordering(732) 00:14:34.255 fused_ordering(733) 00:14:34.255 fused_ordering(734) 00:14:34.255 fused_ordering(735) 00:14:34.255 fused_ordering(736) 00:14:34.255 fused_ordering(737) 00:14:34.255 fused_ordering(738) 00:14:34.255 fused_ordering(739) 00:14:34.255 fused_ordering(740) 00:14:34.255 fused_ordering(741) 00:14:34.255 fused_ordering(742) 00:14:34.255 fused_ordering(743) 00:14:34.255 fused_ordering(744) 00:14:34.255 fused_ordering(745) 00:14:34.255 fused_ordering(746) 00:14:34.255 fused_ordering(747) 00:14:34.255 fused_ordering(748) 00:14:34.255 fused_ordering(749) 00:14:34.255 fused_ordering(750) 00:14:34.255 fused_ordering(751) 00:14:34.255 fused_ordering(752) 00:14:34.255 fused_ordering(753) 00:14:34.255 fused_ordering(754) 00:14:34.255 fused_ordering(755) 00:14:34.255 fused_ordering(756) 00:14:34.255 fused_ordering(757) 00:14:34.255 fused_ordering(758) 00:14:34.255 fused_ordering(759) 00:14:34.255 fused_ordering(760) 00:14:34.255 fused_ordering(761) 00:14:34.255 fused_ordering(762) 00:14:34.255 fused_ordering(763) 00:14:34.255 fused_ordering(764) 00:14:34.255 fused_ordering(765) 00:14:34.255 fused_ordering(766) 00:14:34.255 fused_ordering(767) 00:14:34.255 fused_ordering(768) 00:14:34.255 fused_ordering(769) 00:14:34.255 fused_ordering(770) 00:14:34.255 fused_ordering(771) 00:14:34.255 fused_ordering(772) 00:14:34.255 fused_ordering(773) 00:14:34.255 fused_ordering(774) 00:14:34.256 fused_ordering(775) 00:14:34.256 fused_ordering(776) 00:14:34.256 fused_ordering(777) 00:14:34.256 fused_ordering(778) 00:14:34.256 fused_ordering(779) 00:14:34.256 fused_ordering(780) 00:14:34.256 fused_ordering(781) 00:14:34.256 fused_ordering(782) 00:14:34.256 fused_ordering(783) 00:14:34.256 fused_ordering(784) 00:14:34.256 fused_ordering(785) 00:14:34.256 fused_ordering(786) 00:14:34.256 fused_ordering(787) 00:14:34.256 fused_ordering(788) 00:14:34.256 fused_ordering(789) 00:14:34.256 fused_ordering(790) 00:14:34.256 fused_ordering(791) 00:14:34.256 fused_ordering(792) 00:14:34.256 fused_ordering(793) 00:14:34.256 fused_ordering(794) 00:14:34.256 fused_ordering(795) 00:14:34.256 fused_ordering(796) 00:14:34.256 fused_ordering(797) 00:14:34.256 
fused_ordering(798) 00:14:34.256 fused_ordering(799) 00:14:34.256 fused_ordering(800) 00:14:34.256 fused_ordering(801) 00:14:34.256 fused_ordering(802) 00:14:34.256 fused_ordering(803) 00:14:34.256 fused_ordering(804) 00:14:34.256 fused_ordering(805) 00:14:34.256 fused_ordering(806) 00:14:34.256 fused_ordering(807) 00:14:34.256 fused_ordering(808) 00:14:34.256 fused_ordering(809) 00:14:34.256 fused_ordering(810) 00:14:34.256 fused_ordering(811) 00:14:34.256 fused_ordering(812) 00:14:34.256 fused_ordering(813) 00:14:34.256 fused_ordering(814) 00:14:34.256 fused_ordering(815) 00:14:34.256 fused_ordering(816) 00:14:34.256 fused_ordering(817) 00:14:34.256 fused_ordering(818) 00:14:34.256 fused_ordering(819) 00:14:34.256 fused_ordering(820) 00:14:34.824 fused_ordering(821) 00:14:34.824 fused_ordering(822) 00:14:34.824 fused_ordering(823) 00:14:34.824 fused_ordering(824) 00:14:34.824 fused_ordering(825) 00:14:34.824 fused_ordering(826) 00:14:34.824 fused_ordering(827) 00:14:34.824 fused_ordering(828) 00:14:34.824 fused_ordering(829) 00:14:34.824 fused_ordering(830) 00:14:34.824 fused_ordering(831) 00:14:34.824 fused_ordering(832) 00:14:34.824 fused_ordering(833) 00:14:34.824 fused_ordering(834) 00:14:34.824 fused_ordering(835) 00:14:34.824 fused_ordering(836) 00:14:34.824 fused_ordering(837) 00:14:34.824 fused_ordering(838) 00:14:34.824 fused_ordering(839) 00:14:34.824 fused_ordering(840) 00:14:34.824 fused_ordering(841) 00:14:34.824 fused_ordering(842) 00:14:34.824 fused_ordering(843) 00:14:34.824 fused_ordering(844) 00:14:34.824 fused_ordering(845) 00:14:34.824 fused_ordering(846) 00:14:34.824 fused_ordering(847) 00:14:34.824 fused_ordering(848) 00:14:34.824 fused_ordering(849) 00:14:34.824 fused_ordering(850) 00:14:34.824 fused_ordering(851) 00:14:34.824 fused_ordering(852) 00:14:34.824 fused_ordering(853) 00:14:34.824 fused_ordering(854) 00:14:34.824 fused_ordering(855) 00:14:34.824 fused_ordering(856) 00:14:34.824 fused_ordering(857) 00:14:34.824 fused_ordering(858) 00:14:34.824 fused_ordering(859) 00:14:34.824 fused_ordering(860) 00:14:34.824 fused_ordering(861) 00:14:34.824 fused_ordering(862) 00:14:34.824 fused_ordering(863) 00:14:34.825 fused_ordering(864) 00:14:34.825 fused_ordering(865) 00:14:34.825 fused_ordering(866) 00:14:34.825 fused_ordering(867) 00:14:34.825 fused_ordering(868) 00:14:34.825 fused_ordering(869) 00:14:34.825 fused_ordering(870) 00:14:34.825 fused_ordering(871) 00:14:34.825 fused_ordering(872) 00:14:34.825 fused_ordering(873) 00:14:34.825 fused_ordering(874) 00:14:34.825 fused_ordering(875) 00:14:34.825 fused_ordering(876) 00:14:34.825 fused_ordering(877) 00:14:34.825 fused_ordering(878) 00:14:34.825 fused_ordering(879) 00:14:34.825 fused_ordering(880) 00:14:34.825 fused_ordering(881) 00:14:34.825 fused_ordering(882) 00:14:34.825 fused_ordering(883) 00:14:34.825 fused_ordering(884) 00:14:34.825 fused_ordering(885) 00:14:34.825 fused_ordering(886) 00:14:34.825 fused_ordering(887) 00:14:34.825 fused_ordering(888) 00:14:34.825 fused_ordering(889) 00:14:34.825 fused_ordering(890) 00:14:34.825 fused_ordering(891) 00:14:34.825 fused_ordering(892) 00:14:34.825 fused_ordering(893) 00:14:34.825 fused_ordering(894) 00:14:34.825 fused_ordering(895) 00:14:34.825 fused_ordering(896) 00:14:34.825 fused_ordering(897) 00:14:34.825 fused_ordering(898) 00:14:34.825 fused_ordering(899) 00:14:34.825 fused_ordering(900) 00:14:34.825 fused_ordering(901) 00:14:34.825 fused_ordering(902) 00:14:34.825 fused_ordering(903) 00:14:34.825 fused_ordering(904) 00:14:34.825 fused_ordering(905) 
00:14:34.825 fused_ordering(906) 00:14:34.825 fused_ordering(907) 00:14:34.825 fused_ordering(908) 00:14:34.825 fused_ordering(909) 00:14:34.825 fused_ordering(910) 00:14:34.825 fused_ordering(911) 00:14:34.825 fused_ordering(912) 00:14:34.825 fused_ordering(913) 00:14:34.825 fused_ordering(914) 00:14:34.825 fused_ordering(915) 00:14:34.825 fused_ordering(916) 00:14:34.825 fused_ordering(917) 00:14:34.825 fused_ordering(918) 00:14:34.825 fused_ordering(919) 00:14:34.825 fused_ordering(920) 00:14:34.825 fused_ordering(921) 00:14:34.825 fused_ordering(922) 00:14:34.825 fused_ordering(923) 00:14:34.825 fused_ordering(924) 00:14:34.825 fused_ordering(925) 00:14:34.825 fused_ordering(926) 00:14:34.825 fused_ordering(927) 00:14:34.825 fused_ordering(928) 00:14:34.825 fused_ordering(929) 00:14:34.825 fused_ordering(930) 00:14:34.825 fused_ordering(931) 00:14:34.825 fused_ordering(932) 00:14:34.825 fused_ordering(933) 00:14:34.825 fused_ordering(934) 00:14:34.825 fused_ordering(935) 00:14:34.825 fused_ordering(936) 00:14:34.825 fused_ordering(937) 00:14:34.825 fused_ordering(938) 00:14:34.825 fused_ordering(939) 00:14:34.825 fused_ordering(940) 00:14:34.825 fused_ordering(941) 00:14:34.825 fused_ordering(942) 00:14:34.825 fused_ordering(943) 00:14:34.825 fused_ordering(944) 00:14:34.825 fused_ordering(945) 00:14:34.825 fused_ordering(946) 00:14:34.825 fused_ordering(947) 00:14:34.825 fused_ordering(948) 00:14:34.825 fused_ordering(949) 00:14:34.825 fused_ordering(950) 00:14:34.825 fused_ordering(951) 00:14:34.825 fused_ordering(952) 00:14:34.825 fused_ordering(953) 00:14:34.825 fused_ordering(954) 00:14:34.825 fused_ordering(955) 00:14:34.825 fused_ordering(956) 00:14:34.825 fused_ordering(957) 00:14:34.825 fused_ordering(958) 00:14:34.825 fused_ordering(959) 00:14:34.825 fused_ordering(960) 00:14:34.825 fused_ordering(961) 00:14:34.825 fused_ordering(962) 00:14:34.825 fused_ordering(963) 00:14:34.825 fused_ordering(964) 00:14:34.825 fused_ordering(965) 00:14:34.825 fused_ordering(966) 00:14:34.825 fused_ordering(967) 00:14:34.825 fused_ordering(968) 00:14:34.825 fused_ordering(969) 00:14:34.825 fused_ordering(970) 00:14:34.825 fused_ordering(971) 00:14:34.825 fused_ordering(972) 00:14:34.825 fused_ordering(973) 00:14:34.825 fused_ordering(974) 00:14:34.825 fused_ordering(975) 00:14:34.825 fused_ordering(976) 00:14:34.825 fused_ordering(977) 00:14:34.825 fused_ordering(978) 00:14:34.825 fused_ordering(979) 00:14:34.825 fused_ordering(980) 00:14:34.825 fused_ordering(981) 00:14:34.825 fused_ordering(982) 00:14:34.825 fused_ordering(983) 00:14:34.825 fused_ordering(984) 00:14:34.825 fused_ordering(985) 00:14:34.825 fused_ordering(986) 00:14:34.825 fused_ordering(987) 00:14:34.825 fused_ordering(988) 00:14:34.825 fused_ordering(989) 00:14:34.825 fused_ordering(990) 00:14:34.825 fused_ordering(991) 00:14:34.825 fused_ordering(992) 00:14:34.825 fused_ordering(993) 00:14:34.825 fused_ordering(994) 00:14:34.825 fused_ordering(995) 00:14:34.825 fused_ordering(996) 00:14:34.825 fused_ordering(997) 00:14:34.825 fused_ordering(998) 00:14:34.825 fused_ordering(999) 00:14:34.825 fused_ordering(1000) 00:14:34.825 fused_ordering(1001) 00:14:34.825 fused_ordering(1002) 00:14:34.825 fused_ordering(1003) 00:14:34.825 fused_ordering(1004) 00:14:34.825 fused_ordering(1005) 00:14:34.825 fused_ordering(1006) 00:14:34.825 fused_ordering(1007) 00:14:34.825 fused_ordering(1008) 00:14:34.825 fused_ordering(1009) 00:14:34.825 fused_ordering(1010) 00:14:34.825 fused_ordering(1011) 00:14:34.825 fused_ordering(1012) 
00:14:34.825 fused_ordering(1013) 00:14:34.825 fused_ordering(1014) 00:14:34.825 fused_ordering(1015) 00:14:34.825 fused_ordering(1016) 00:14:34.825 fused_ordering(1017) 00:14:34.825 fused_ordering(1018) 00:14:34.825 fused_ordering(1019) 00:14:34.825 fused_ordering(1020) 00:14:34.825 fused_ordering(1021) 00:14:34.825 fused_ordering(1022) 00:14:34.825 fused_ordering(1023) 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.825 rmmod nvme_tcp 00:14:34.825 rmmod nvme_fabrics 00:14:34.825 rmmod nvme_keyring 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1320427 ']' 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1320427 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1320427 ']' 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1320427 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1320427 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1320427' 00:14:34.825 killing process with pid 1320427 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1320427 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1320427 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.825 00:39:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.362 00:39:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.362 00:14:37.362 real 0m10.538s 00:14:37.363 user 0m4.893s 00:14:37.363 sys 0m5.826s 00:14:37.363 00:39:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.363 00:39:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.363 ************************************ 00:14:37.363 END TEST nvmf_fused_ordering 00:14:37.363 ************************************ 00:14:37.363 00:39:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:37.363 00:39:48 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:37.363 00:39:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.363 00:39:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.363 00:39:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.363 ************************************ 00:14:37.363 START TEST nvmf_delete_subsystem 00:14:37.363 ************************************ 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:37.363 * Looking for test storage... 00:14:37.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.363 00:39:48 
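The nvmftestfini sequence that just completed is the per-test teardown: flush, unload the kernel NVMe/TCP modules, kill the target application, and dismantle the test network. A minimal hand-runnable sketch of the same steps, using the PID and interface names from this run (the _remove_spdk_ns helper body is not shown in the log, so deleting the namespace is an assumption):

    # flush dirty pages, then unload the kernel NVMe/TCP initiator stack
    sync
    modprobe -v -r nvme-tcp        # the log shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    # stop the nvmf target application started for the previous test
    kill 1320427
    # drop initiator-side addresses and (assumed) remove the target namespace
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk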
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated toolchain prefixes elided]:/var/lib/snapd/snap/bin 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated toolchain prefixes elided]:/var/lib/snapd/snap/bin 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated toolchain prefixes elided]:/var/lib/snapd/snap/bin 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.363 00:39:48 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.363 00:39:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.640 00:39:54 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:42.640 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:42.640 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:42.640 00:39:54 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:42.640 Found net devices under 0000:86:00.0: cvl_0_0 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:42.640 Found net devices under 0000:86:00.1: cvl_0_1 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:42.640 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.641 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.641 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:42.641 00:39:54 
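The scan that just ran is nvmf/common.sh walking the PCI bus for supported NICs; both functions of an Intel E810 (vendor 0x8086, device 0x159b) are matched and their kernel net devices (cvl_0_0, cvl_0_1) collected. A rough standalone equivalent of that discovery step, assuming the usual sysfs layout:

    # list the net devices behind every Intel E810 (0x8086:0x159b) PCI function
    for pci in /sys/bus/pci/devices/*; do
        [[ "$(cat "$pci/vendor")" == 0x8086 && "$(cat "$pci/device")" == 0x159b ]] || continue
        echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
        ls "$pci/net" 2>/dev/null    # cvl_0_0 and cvl_0_1 on this host
    done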
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.641 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.641 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:42.641 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:42.641 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.641 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:42.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:14:42.900 00:14:42.900 --- 10.0.0.2 ping statistics --- 00:14:42.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.900 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:42.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:14:42.900 00:14:42.900 --- 10.0.0.1 ping statistics --- 00:14:42.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.900 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1324413 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1324413 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1324413 ']' 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.900 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.158 [2024-07-13 00:39:54.474802] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:43.158 [2024-07-13 00:39:54.474849] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.158 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.158 [2024-07-13 00:39:54.532130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:43.158 [2024-07-13 00:39:54.573669] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
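Stripped of the xtrace noise, the nvmf_tcp_init block above wires the two E810 ports into a loopback-style fabric: the target-side port moves into a private network namespace and each side gets one address on 10.0.0.0/24, verified with a ping in each direction. The exact command sequence from this run, gathered for readability:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # verify both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1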
00:14:43.158 [2024-07-13 00:39:54.573707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.159 [2024-07-13 00:39:54.573714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.159 [2024-07-13 00:39:54.573719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.159 [2024-07-13 00:39:54.573724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.159 [2024-07-13 00:39:54.576241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.159 [2024-07-13 00:39:54.576244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.159 [2024-07-13 00:39:54.706074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.159 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 [2024-07-13 00:39:54.726233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 NULL1 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 Delay0 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1324443 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:43.416 00:39:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:43.416 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.416 [2024-07-13 00:39:54.816928] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
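With the target application up and its RPC socket listening, the test provisions a deliberately slow subsystem: a null bdev wrapped in a delay bdev with roughly one-second latencies, so that I/O issued by perf is still outstanding when the subsystem is later deleted. The same provisioning written out against scripts/rpc.py, which is what rpc_cmd wraps in this harness (the rpc shorthand variable is introduced here for readability):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192                 # transport flags exactly as passed by the test
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512                         # 1000 MB backing, 512-byte blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds, ~1 s each
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0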
00:14:45.313 00:39:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.313 00:39:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.313 00:39:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Write completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Write completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Write completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 Read completed with error (sct=0, sc=8) 00:14:45.571 starting I/O failed: -6 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Write completed with error (sct=0, sc=8) 00:14:45.572 starting I/O failed: -6 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 starting I/O failed: -6 00:14:45.572 Write completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Write completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 starting I/O failed: -6 00:14:45.572 Write completed with error (sct=0, sc=8) 00:14:45.572 Write completed with error (sct=0, sc=8) 00:14:45.572 Read completed with error (sct=0, sc=8) 00:14:45.572 Write completed with error (sct=0, sc=8) 00:14:45.572 starting I/O failed: -6 00:14:45.572 Write completed with error (sct=0, sc=8) 00:14:45.572 [2024-07-13 00:39:56.984968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580410 is same with the state(5) to be set 00:14:45.572 Read completed with 
error (sct=0, sc=8) 00:14:45.572 [several hundred further "Read/Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" condensed here: every queued perf I/O fails while the subsystem is torn down, and nvme_tcp logs "The recv state of tqpair=... is same with the state(5) to be set" for tqpair 0x7fd73800cfe0 (00:39:56.985465), 0x157e330 (00:39:57.953315), 0x7fd738000c00 (00:39:57.986739), 0x1580230 (00:39:57.986940) and 0x15805f0 (00:39:57.987075)] [2024-07-13 00:39:57.987626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0x7fd73800d600 is same with the state(5) to be set 00:14:46.509 Initializing NVMe Controllers 00:14:46.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.509 Controller IO queue size 128, less than required. 00:14:46.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:46.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:46.509 Initialization complete. Launching workers. 00:14:46.509 ======================================================== 00:14:46.509 Latency(us) 00:14:46.509 Device Information : IOPS MiB/s Average min max 00:14:46.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.93 0.09 880566.12 451.53 1009837.04 00:14:46.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.05 0.08 922909.49 240.23 1010052.54 00:14:46.509 ======================================================== 00:14:46.509 Total : 334.98 0.16 900544.15 240.23 1010052.54 00:14:46.509 00:14:46.509 [2024-07-13 00:39:57.988296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157e330 (9): Bad file descriptor 00:14:46.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:46.509 00:39:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.509 00:39:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:46.509 00:39:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1324443 00:14:46.509 00:39:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1324443 00:14:47.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1324443) - No such process 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1324443 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1324443 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1324443 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.077 00:39:58 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:47.077 [2024-07-13 00:39:58.516908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1325103 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:47.077 00:39:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:47.077 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.077 [2024-07-13 00:39:58.588882] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
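The delay/kill/sleep iterations that follow are the script waiting for the background perf process to go away; reconstructed from the xtrace (delete_subsystem.sh lines 34-38 for the first run, 56-60 for this one), the loop is roughly:

delay=0
while kill -0 "$perf_pid" 2> /dev/null; do    # perf still running?
    (( delay++ > 20 )) && exit 1              # bound the wait (30 polls in the first run)
    sleep 0.5
done
# kill -0 eventually reports "No such process", i.e. perf has exited. In the
# first run it exited because its I/O failed, which `NOT wait $perf_pid`
# asserts; in this run the subsystem stays up, the 3 s workload completes
# against the ~1 s-latency delay bdev, and a plain `wait $perf_pid` succeeds.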
00:14:47.644 00:39:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:47.644 00:39:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:47.644 00:39:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:48.211 00:39:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.211 00:39:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:48.211 00:39:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:48.778 00:40:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:48.778 00:40:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:48.778 00:40:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:49.036 00:40:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:49.036 00:40:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:49.036 00:40:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:49.603 00:40:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:49.603 00:40:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:49.603 00:40:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.171 00:40:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.171 00:40:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:50.171 00:40:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.171 Initializing NVMe Controllers 00:14:50.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.171 Controller IO queue size 128, less than required. 00:14:50.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:50.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:50.171 Initialization complete. Launching workers. 
00:14:50.171 ======================================================== 00:14:50.171 Latency(us) 00:14:50.171 Device Information : IOPS MiB/s Average min max 00:14:50.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001880.27 1000128.80 1006271.29 00:14:50.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003544.98 1000132.94 1009731.90 00:14:50.171 ======================================================== 00:14:50.171 Total : 256.00 0.12 1002712.63 1000128.80 1009731.90 00:14:50.171 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1325103 00:14:50.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1325103) - No such process 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1325103 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:50.739 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.740 rmmod nvme_tcp 00:14:50.740 rmmod nvme_fabrics 00:14:50.740 rmmod nvme_keyring 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1324413 ']' 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1324413 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1324413 ']' 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1324413 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1324413 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1324413' 00:14:50.740 killing process with pid 1324413 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1324413 00:14:50.740 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1324413 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.999 00:40:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.903 00:40:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.903 00:14:52.903 real 0m15.889s 00:14:52.903 user 0m29.161s 00:14:52.903 sys 0m5.160s 00:14:52.903 00:40:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.904 00:40:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:52.904 ************************************ 00:14:52.904 END TEST nvmf_delete_subsystem 00:14:52.904 ************************************ 00:14:52.904 00:40:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:52.904 00:40:04 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:52.904 00:40:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:52.904 00:40:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.904 00:40:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.163 ************************************ 00:14:53.163 START TEST nvmf_ns_masking 00:14:53.163 ************************************ 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:53.163 * Looking for test storage... 
00:14:53.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.163 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=[same toolchain PATH as printed at paths/export.sh@2 above, with /opt/go/1.21.1/bin prepended - duplicate string condensed] 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[same PATH with /opt/protoc/21.7/bin prepended - condensed] 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH - condensed] 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9720f8c3-ed41-40e0-919f-6f011d548272 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1fa03fa6-23b2-43c2-b16b-4dc6bae5d24e 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d7fed1dc-3db7-4328-b1c1-eb72886557a2 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.164 00:40:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.731 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.731 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.731 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:59.732 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:59.732 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.732 
00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:59.732 Found net devices under 0000:86:00.0: cvl_0_0 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:59.732 Found net devices under 0000:86:00.1: cvl_0_1 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:14:59.732 00:14:59.732 --- 10.0.0.2 ping statistics --- 00:14:59.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.732 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:14:59.732 00:14:59.732 --- 10.0.0.1 ping statistics --- 00:14:59.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.732 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1329125 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1329125 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1329125 ']' 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.732 00:40:10 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.732 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.732 [2024-07-13 00:40:10.484745] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:59.732 [2024-07-13 00:40:10.484787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.732 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.732 [2024-07-13 00:40:10.553872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.732 [2024-07-13 00:40:10.593681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.732 [2024-07-13 00:40:10.593720] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.733 [2024-07-13 00:40:10.593727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.733 [2024-07-13 00:40:10.593733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.733 [2024-07-13 00:40:10.593738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.733 [2024-07-13 00:40:10.593754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:59.733 [2024-07-13 00:40:10.870856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:59.733 00:40:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.733 Malloc1 00:14:59.733 00:40:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.991 Malloc2 00:14:59.991 00:40:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
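For reference, the target bring-up xtraced in this block reduces to a few commands - a sketch, with $SPDK standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk and the waitforlisten polling simplified to one plausible form:

ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# waitforlisten: poll the RPC socket (up to max_retries=100 as logged) until
# the target answers
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB, 512 B blocks

Running the target under ip netns exec cvl_0_0_ns_spdk is what lets the host-side nvme connect reach 10.0.0.2 from 10.0.0.1 over the cvl_0_0/cvl_0_1 interfaces configured above (10.0.0.2 inside the namespace, 10.0.0.1 outside).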
00:14:59.991 00:40:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:00.307 00:40:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.307 [2024-07-13 00:40:11.827981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.581 00:40:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:00.581 00:40:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7fed1dc-3db7-4328-b1c1-eb72886557a2 -a 10.0.0.2 -s 4420 -i 4 00:15:00.581 00:40:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.581 00:40:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:00.581 00:40:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.581 00:40:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:00.581 00:40:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:02.481 00:40:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:02.481 00:40:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:02.481 00:40:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.481 00:40:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:02.481 00:40:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.481 00:40:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:02.481 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:02.481 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.739 [ 0]:0x1 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b522641a0b14d07b407b6ea35660649 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b522641a0b14d07b407b6ea35660649 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
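The ns_is_visible probe used from here on (ns_masking.sh@43-@45 in the xtrace) boils down to the helper below - a sketch reconstructed from the logged commands; $ctrl_id is the controller name (nvme0) pulled out of nvme list-subsys with jq as shown above:

ns_is_visible() {    # $1 = NSID to probe, e.g. 0x1
    nvme list-ns "/dev/$ctrl_id" | grep "$1"
    nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
    # a namespace this host is not allowed to attach reports an all-zero NGUID
    [[ $nguid != "00000000000000000000000000000000" ]]
}

That all-zero comparison is the whole masking check: after Malloc1 is re-added with --no-auto-visible below, NSID 0x1 identifies as zeroes for this host until nvmf_ns_add_host grants it visibility again.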
00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.739 [ 0]:0x1 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.739 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b522641a0b14d07b407b6ea35660649 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b522641a0b14d07b407b6ea35660649 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.998 [ 1]:0x2 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9ecb0e016ead4ce5ad42c1a5035a8834 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9ecb0e016ead4ce5ad42c1a5035a8834 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:02.998 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.256 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.256 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:03.514 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:03.514 00:40:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7fed1dc-3db7-4328-b1c1-eb72886557a2 -a 10.0.0.2 -s 4420 -i 4 00:15:03.773 00:40:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:03.773 00:40:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.773 00:40:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.773 00:40:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:03.773 00:40:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:03.773 00:40:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.677 00:40:17 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.677 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:05.936 [ 0]:0x2 00:15:05.936 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:05.936 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.936 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9ecb0e016ead4ce5ad42c1a5035a8834 00:15:05.936 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9ecb0e016ead4ce5ad42c1a5035a8834 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.936 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.195 [ 0]:0x1 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b522641a0b14d07b407b6ea35660649 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b522641a0b14d07b407b6ea35660649 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:06.195 [ 1]:0x2 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9ecb0e016ead4ce5ad42c1a5035a8834 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9ecb0e016ead4ce5ad42c1a5035a8834 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.195 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:06.454 [ 0]:0x2 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9ecb0e016ead4ce5ad42c1a5035a8834 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9ecb0e016ead4ce5ad42c1a5035a8834 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.454 00:40:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7fed1dc-3db7-4328-b1c1-eb72886557a2 -a 10.0.0.2 -s 4420 -i 4 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:06.712 00:40:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
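What the sequence above exercises: a namespace created with --no-auto-visible starts masked for every host, and visibility is then granted or revoked per host NQN while the subsystem stays online, so the namespace appears and disappears on a connected initiator without reconnecting. Condensed from the rpc.py calls above (same script path and NQNs as this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# NSID 1 starts invisible to every host (no auto-attach).
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Unmask NSID 1 for host1, then mask it again; the connected host sees the
# namespace come and go in nvme list-ns.
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1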
00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:09.245 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.246 [ 0]:0x1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6b522641a0b14d07b407b6ea35660649 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6b522641a0b14d07b407b6ea35660649 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.246 [ 1]:0x2 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9ecb0e016ead4ce5ad42c1a5035a8834 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9ecb0e016ead4ce5ad42c1a5035a8834 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.246 [ 0]:0x2 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9ecb0e016ead4ce5ad42c1a5035a8834 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9ecb0e016ead4ce5ad42c1a5035a8834 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:09.246 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:09.505 [2024-07-13 00:40:20.913987] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:09.505 request: 00:15:09.505 { 00:15:09.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.505 "nsid": 2, 00:15:09.505 "host": "nqn.2016-06.io.spdk:host1", 00:15:09.505 "method": "nvmf_ns_remove_host", 00:15:09.505 "req_id": 1 00:15:09.505 } 00:15:09.505 Got JSON-RPC error response 00:15:09.505 response: 00:15:09.505 { 00:15:09.505 "code": -32602, 00:15:09.505 "message": "Invalid parameters" 00:15:09.505 } 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.505 00:40:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.505 [ 0]:0x2 00:15:09.505 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.505 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.505 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9ecb0e016ead4ce5ad42c1a5035a8834 00:15:09.505 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9ecb0e016ead4ce5ad42c1a5035a8834 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.505 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:09.505 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.764 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1331056 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1331056 /var/tmp/host.sock 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1331056 ']' 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:09.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.765 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:09.765 [2024-07-13 00:40:21.139294] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
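From here the test drops the kernel initiator and drives the host side from SPDK as well: a second spdk_tgt is launched on core mask 0x2 with its JSON-RPC server on /var/tmp/host.sock, and every hostrpc call that follows is rpc.py pointed at that socket. Roughly, with the same paths and NQNs as above (backgrounding shown here is a simplification of the test framework's process handling):

# Second SPDK instance acting as the NVMe-oF host (initiator) side.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
# Attach a controller as host1; only the namespaces unmasked for that NQN
# surface as bdevs (nvme0n1 below), which is the point of the masking test.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0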
00:15:09.765 [2024-07-13 00:40:21.139339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331056 ] 00:15:09.765 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.765 [2024-07-13 00:40:21.207279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.765 [2024-07-13 00:40:21.247574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.023 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.023 00:40:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:10.023 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.282 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:10.282 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9720f8c3-ed41-40e0-919f-6f011d548272 00:15:10.282 00:40:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:10.282 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9720F8C3ED4140E0919F6F011D548272 -i 00:15:10.540 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1fa03fa6-23b2-43c2-b16b-4dc6bae5d24e 00:15:10.540 00:40:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:10.540 00:40:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1FA03FA623B243C2B16B4DC6BAE5D24E -i 00:15:10.799 00:40:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:10.799 00:40:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:11.058 00:40:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:11.058 00:40:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:11.317 nvme0n1 00:15:11.317 00:40:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:11.317 00:40:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:15:11.594 nvme1n2 00:15:11.594 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:11.594 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:11.594 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:11.595 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:11.595 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:11.853 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:11.853 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:11.853 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:11.853 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9720f8c3-ed41-40e0-919f-6f011d548272 == \9\7\2\0\f\8\c\3\-\e\d\4\1\-\4\0\e\0\-\9\1\9\f\-\6\f\0\1\1\d\5\4\8\2\7\2 ]] 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1fa03fa6-23b2-43c2-b16b-4dc6bae5d24e == \1\f\a\0\3\f\a\6\-\2\3\b\2\-\4\3\c\2\-\b\1\6\b\-\4\d\c\6\b\a\e\5\d\2\4\e ]] 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1331056 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1331056 ']' 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1331056 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1331056 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1331056' 00:15:12.111 killing process with pid 1331056 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1331056 00:15:12.111 00:40:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1331056 00:15:12.678 00:40:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:12.678 00:40:24 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.678 rmmod nvme_tcp 00:15:12.678 rmmod nvme_fabrics 00:15:12.678 rmmod nvme_keyring 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1329125 ']' 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1329125 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1329125 ']' 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1329125 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.678 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1329125 00:15:12.679 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.679 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.679 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1329125' 00:15:12.679 killing process with pid 1329125 00:15:12.679 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1329125 00:15:12.679 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1329125 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.938 00:40:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.474 00:40:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.474 00:15:15.474 real 0m22.031s 00:15:15.474 user 0m22.944s 00:15:15.474 sys 0m6.373s 00:15:15.474 00:40:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.474 00:40:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:15.474 ************************************ 00:15:15.474 END TEST nvmf_ns_masking 00:15:15.474 ************************************ 00:15:15.474 00:40:26 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:15.474 00:40:26 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:15.474 00:40:26 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:15.474 00:40:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:15.474 00:40:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.474 00:40:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:15.474 ************************************ 00:15:15.474 START TEST nvmf_nvme_cli 00:15:15.474 ************************************ 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:15.474 * Looking for test storage... 00:15:15.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.474 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.475 00:40:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:20.748 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.748 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:20.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:20.749 Found net devices under 0000:86:00.0: cvl_0_0 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:20.749 Found net devices under 0000:86:00.1: cvl_0_1 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.749 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.008 00:40:32 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:21.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:15:21.008 00:15:21.008 --- 10.0.0.2 ping statistics --- 00:15:21.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.008 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:15:21.008 00:15:21.008 --- 10.0.0.1 ping statistics --- 00:15:21.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.008 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1335087 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1335087 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1335087 ']' 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.008 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.008 [2024-07-13 00:40:32.535505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
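The ping exchange above is the tail end of nvmftestinit's topology setup: the two ports of the NIC under test (cvl_0_0 and cvl_0_1) form the link, one port is moved into a fresh network namespace to act as the target at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1, and a ping in each direction proves the path before any NVMe/TCP traffic flows. The relevant commands, as run above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator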
00:15:21.008 [2024-07-13 00:40:32.535555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.008 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.264 [2024-07-13 00:40:32.594603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.264 [2024-07-13 00:40:32.639429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.264 [2024-07-13 00:40:32.639467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.264 [2024-07-13 00:40:32.639475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.264 [2024-07-13 00:40:32.639481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.264 [2024-07-13 00:40:32.639486] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.264 [2024-07-13 00:40:32.639532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.264 [2024-07-13 00:40:32.639643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.264 [2024-07-13 00:40:32.639659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.264 [2024-07-13 00:40:32.639665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.264 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.264 [2024-07-13 00:40:32.789244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.265 Malloc0 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.265 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 Malloc1 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.520 00:40:32 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 [2024-07-13 00:40:32.870358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.520 00:40:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:21.520 00:15:21.520 Discovery Log Number of Records 2, Generation counter 2 00:15:21.520 =====Discovery Log Entry 0====== 00:15:21.520 trtype: tcp 00:15:21.520 adrfam: ipv4 00:15:21.520 subtype: current discovery subsystem 00:15:21.520 treq: not required 00:15:21.520 portid: 0 00:15:21.520 trsvcid: 4420 00:15:21.520 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:21.520 traddr: 10.0.0.2 00:15:21.520 eflags: explicit discovery connections, duplicate discovery information 00:15:21.520 sectype: none 00:15:21.520 =====Discovery Log Entry 1====== 00:15:21.520 trtype: tcp 00:15:21.520 adrfam: ipv4 00:15:21.520 subtype: nvme subsystem 00:15:21.520 treq: not required 00:15:21.520 portid: 0 00:15:21.520 trsvcid: 4420 00:15:21.520 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:21.520 traddr: 10.0.0.2 00:15:21.520 eflags: none 00:15:21.520 sectype: none 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:21.520 00:40:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:22.899 00:40:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:22.899 00:40:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:22.899 00:40:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.899 00:40:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:22.899 00:40:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:22.899 00:40:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:24.857 00:40:36 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:24.857 /dev/nvme0n1 ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:24.857 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.114 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.114 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:25.114 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:25.114 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.114 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:25.114 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.371 rmmod nvme_tcp 00:15:25.371 rmmod nvme_fabrics 00:15:25.371 rmmod nvme_keyring 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1335087 ']' 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1335087 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1335087 ']' 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1335087 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1335087 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1335087' 00:15:25.371 killing process with pid 1335087 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1335087 00:15:25.371 00:40:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1335087 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.630 00:40:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.537 00:40:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.537 00:15:27.537 real 0m12.495s 00:15:27.537 user 0m18.955s 00:15:27.537 sys 0m4.881s 00:15:27.537 00:40:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.537 00:40:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.537 ************************************ 00:15:27.537 END TEST nvmf_nvme_cli 00:15:27.537 ************************************ 00:15:27.797 00:40:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:27.797 00:40:39 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:27.797 00:40:39 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:27.797 00:40:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.797 00:40:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.797 00:40:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.797 ************************************ 00:15:27.797 START TEST nvmf_vfio_user 00:15:27.797 ************************************ 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:27.797 * Looking for test storage... 00:15:27.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:27.797 
00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1336221 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1336221' 00:15:27.797 Process pid: 1336221 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1336221 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1336221 ']' 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.797 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:27.797 [2024-07-13 00:40:39.329137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:27.798 [2024-07-13 00:40:39.329183] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.798 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.056 [2024-07-13 00:40:39.396359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.056 [2024-07-13 00:40:39.438319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.056 [2024-07-13 00:40:39.438357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.056 [2024-07-13 00:40:39.438363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.056 [2024-07-13 00:40:39.438370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.056 [2024-07-13 00:40:39.438375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
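(Once the reactors start, the trace below creates the VFIOUSER transport and two controllers, vfio-user1 and vfio-user2, each a malloc-backed subsystem listening on a socket directory rather than an IP:port. A minimal per-controller sketch of nvmf_vfio_user.sh@64-74, with the same path hedges as the sketch above:

    # VFIOUSER listener addresses are filesystem paths; -s 0 stands in for a service port
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second controller repeats the same steps with Malloc2, cnode2, serial SPDK2 and the vfio-user2/2 directory.)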
00:15:28.056 [2024-07-13 00:40:39.438446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.056 [2024-07-13 00:40:39.438556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.056 [2024-07-13 00:40:39.438662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.056 [2024-07-13 00:40:39.438664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.056 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.056 00:40:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:28.056 00:40:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:28.990 00:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:29.248 00:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:29.248 00:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:29.248 00:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:29.248 00:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:29.248 00:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:29.505 Malloc1 00:15:29.505 00:40:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:29.763 00:40:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:30.021 00:40:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:30.021 00:40:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.021 00:40:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:30.021 00:40:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:30.278 Malloc2 00:15:30.278 00:40:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:30.536 00:40:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:30.536 00:40:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:30.794 00:40:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:30.794 00:40:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:30.794 00:40:42 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.794 00:40:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:30.794 00:40:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:30.794 00:40:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:30.794 [2024-07-13 00:40:42.288472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:30.794 [2024-07-13 00:40:42.288520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336840 ] 00:15:30.794 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.794 [2024-07-13 00:40:42.317767] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:30.794 [2024-07-13 00:40:42.325489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:30.794 [2024-07-13 00:40:42.325512] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0c3759f000 00:15:30.794 [2024-07-13 00:40:42.326488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.327486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.328492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.329498] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.330504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.331513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.332516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.333526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.794 [2024-07-13 00:40:42.334530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:30.794 [2024-07-13 00:40:42.334539] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0c36365000 00:15:30.794 [2024-07-13 00:40:42.335482] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:30.794 [2024-07-13 00:40:42.346052] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:30.794 [2024-07-13 00:40:42.346072] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:30.794 [2024-07-13 00:40:42.350657] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:30.794 [2024-07-13 00:40:42.350692] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:30.795 [2024-07-13 00:40:42.350760] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:30.795 [2024-07-13 00:40:42.350776] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:30.795 [2024-07-13 00:40:42.350781] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:30.795 [2024-07-13 00:40:42.353232] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:30.795 [2024-07-13 00:40:42.353241] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:30.795 [2024-07-13 00:40:42.353248] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:30.795 [2024-07-13 00:40:42.353662] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:30.795 [2024-07-13 00:40:42.353670] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:30.795 [2024-07-13 00:40:42.353676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:31.054 [2024-07-13 00:40:42.354672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:31.054 [2024-07-13 00:40:42.354681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:31.054 [2024-07-13 00:40:42.355676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:31.054 [2024-07-13 00:40:42.355682] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:31.054 [2024-07-13 00:40:42.355687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:31.054 [2024-07-13 00:40:42.355692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:31.054 [2024-07-13 00:40:42.355800] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:31.054 [2024-07-13 00:40:42.355804] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:31.054 [2024-07-13 00:40:42.355809] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:31.054 [2024-07-13 00:40:42.356685] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:31.054 [2024-07-13 00:40:42.357687] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:31.054 [2024-07-13 00:40:42.358693] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:31.054 [2024-07-13 00:40:42.359697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.054 [2024-07-13 00:40:42.359758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:31.054 [2024-07-13 00:40:42.360708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:31.054 [2024-07-13 00:40:42.360714] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:31.054 [2024-07-13 00:40:42.360719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360736] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:31.055 [2024-07-13 00:40:42.360746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360759] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.055 [2024-07-13 00:40:42.360764] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.055 [2024-07-13 00:40:42.360776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.360822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.360830] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:31.055 [2024-07-13 00:40:42.360836] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:31.055 [2024-07-13 00:40:42.360840] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:31.055 [2024-07-13 00:40:42.360844] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:31.055 [2024-07-13 00:40:42.360848] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:31.055 [2024-07-13 00:40:42.360852] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:31.055 [2024-07-13 00:40:42.360856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.360883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.360897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.055 [2024-07-13 00:40:42.360904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.055 [2024-07-13 00:40:42.360912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.055 [2024-07-13 00:40:42.360919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.055 [2024-07-13 00:40:42.360923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.360947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.360952] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:31.055 [2024-07-13 00:40:42.360956] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.360974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.360987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361035] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361042] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361049] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:31.055 [2024-07-13 00:40:42.361053] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:31.055 [2024-07-13 00:40:42.361059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361083] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:31.055 [2024-07-13 00:40:42.361091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361105] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.055 [2024-07-13 00:40:42.361109] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.055 [2024-07-13 00:40:42.361114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361157] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.055 [2024-07-13 00:40:42.361161] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.055 [2024-07-13 00:40:42.361166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:31.055 [2024-07-13 00:40:42.361199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361204] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361217] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:31.055 [2024-07-13 00:40:42.361221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:31.055 [2024-07-13 00:40:42.361230] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:31.055 [2024-07-13 00:40:42.361246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361330] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:31.055 [2024-07-13 00:40:42.361334] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:31.055 [2024-07-13 00:40:42.361338] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:31.055 [2024-07-13 00:40:42.361341] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:31.055 [2024-07-13 00:40:42.361346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:31.055 [2024-07-13 00:40:42.361352] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:31.055 
[2024-07-13 00:40:42.361356] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:31.055 [2024-07-13 00:40:42.361362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361368] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:31.055 [2024-07-13 00:40:42.361371] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.055 [2024-07-13 00:40:42.361377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361383] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:31.055 [2024-07-13 00:40:42.361387] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:31.055 [2024-07-13 00:40:42.361393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:31.055 [2024-07-13 00:40:42.361399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:31.055 [2024-07-13 00:40:42.361425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:31.055 ===================================================== 00:15:31.056 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.056 ===================================================== 00:15:31.056 Controller Capabilities/Features 00:15:31.056 ================================ 00:15:31.056 Vendor ID: 4e58 00:15:31.056 Subsystem Vendor ID: 4e58 00:15:31.056 Serial Number: SPDK1 00:15:31.056 Model Number: SPDK bdev Controller 00:15:31.056 Firmware Version: 24.09 00:15:31.056 Recommended Arb Burst: 6 00:15:31.056 IEEE OUI Identifier: 8d 6b 50 00:15:31.056 Multi-path I/O 00:15:31.056 May have multiple subsystem ports: Yes 00:15:31.056 May have multiple controllers: Yes 00:15:31.056 Associated with SR-IOV VF: No 00:15:31.056 Max Data Transfer Size: 131072 00:15:31.056 Max Number of Namespaces: 32 00:15:31.056 Max Number of I/O Queues: 127 00:15:31.056 NVMe Specification Version (VS): 1.3 00:15:31.056 NVMe Specification Version (Identify): 1.3 00:15:31.056 Maximum Queue Entries: 256 00:15:31.056 Contiguous Queues Required: Yes 00:15:31.056 Arbitration Mechanisms Supported 00:15:31.056 Weighted Round Robin: Not Supported 00:15:31.056 Vendor Specific: Not Supported 00:15:31.056 Reset Timeout: 15000 ms 00:15:31.056 Doorbell Stride: 4 bytes 00:15:31.056 NVM Subsystem Reset: Not Supported 00:15:31.056 Command Sets Supported 00:15:31.056 NVM Command Set: Supported 00:15:31.056 Boot Partition: Not Supported 00:15:31.056 Memory Page Size Minimum: 4096 bytes 00:15:31.056 Memory Page Size Maximum: 4096 bytes 00:15:31.056 Persistent Memory Region: Not Supported 
00:15:31.056 Optional Asynchronous Events Supported 00:15:31.056 Namespace Attribute Notices: Supported 00:15:31.056 Firmware Activation Notices: Not Supported 00:15:31.056 ANA Change Notices: Not Supported 00:15:31.056 PLE Aggregate Log Change Notices: Not Supported 00:15:31.056 LBA Status Info Alert Notices: Not Supported 00:15:31.056 EGE Aggregate Log Change Notices: Not Supported 00:15:31.056 Normal NVM Subsystem Shutdown event: Not Supported 00:15:31.056 Zone Descriptor Change Notices: Not Supported 00:15:31.056 Discovery Log Change Notices: Not Supported 00:15:31.056 Controller Attributes 00:15:31.056 128-bit Host Identifier: Supported 00:15:31.056 Non-Operational Permissive Mode: Not Supported 00:15:31.056 NVM Sets: Not Supported 00:15:31.056 Read Recovery Levels: Not Supported 00:15:31.056 Endurance Groups: Not Supported 00:15:31.056 Predictable Latency Mode: Not Supported 00:15:31.056 Traffic Based Keep ALive: Not Supported 00:15:31.056 Namespace Granularity: Not Supported 00:15:31.056 SQ Associations: Not Supported 00:15:31.056 UUID List: Not Supported 00:15:31.056 Multi-Domain Subsystem: Not Supported 00:15:31.056 Fixed Capacity Management: Not Supported 00:15:31.056 Variable Capacity Management: Not Supported 00:15:31.056 Delete Endurance Group: Not Supported 00:15:31.056 Delete NVM Set: Not Supported 00:15:31.056 Extended LBA Formats Supported: Not Supported 00:15:31.056 Flexible Data Placement Supported: Not Supported 00:15:31.056 00:15:31.056 Controller Memory Buffer Support 00:15:31.056 ================================ 00:15:31.056 Supported: No 00:15:31.056 00:15:31.056 Persistent Memory Region Support 00:15:31.056 ================================ 00:15:31.056 Supported: No 00:15:31.056 00:15:31.056 Admin Command Set Attributes 00:15:31.056 ============================ 00:15:31.056 Security Send/Receive: Not Supported 00:15:31.056 Format NVM: Not Supported 00:15:31.056 Firmware Activate/Download: Not Supported 00:15:31.056 Namespace Management: Not Supported 00:15:31.056 Device Self-Test: Not Supported 00:15:31.056 Directives: Not Supported 00:15:31.056 NVMe-MI: Not Supported 00:15:31.056 Virtualization Management: Not Supported 00:15:31.056 Doorbell Buffer Config: Not Supported 00:15:31.056 Get LBA Status Capability: Not Supported 00:15:31.056 Command & Feature Lockdown Capability: Not Supported 00:15:31.056 Abort Command Limit: 4 00:15:31.056 Async Event Request Limit: 4 00:15:31.056 Number of Firmware Slots: N/A 00:15:31.056 Firmware Slot 1 Read-Only: N/A 00:15:31.056 Firmware Activation Without Reset: N/A 00:15:31.056 Multiple Update Detection Support: N/A 00:15:31.056 Firmware Update Granularity: No Information Provided 00:15:31.056 Per-Namespace SMART Log: No 00:15:31.056 Asymmetric Namespace Access Log Page: Not Supported 00:15:31.056 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:31.056 Command Effects Log Page: Supported 00:15:31.056 Get Log Page Extended Data: Supported 00:15:31.056 Telemetry Log Pages: Not Supported 00:15:31.056 Persistent Event Log Pages: Not Supported 00:15:31.056 Supported Log Pages Log Page: May Support 00:15:31.056 Commands Supported & Effects Log Page: Not Supported 00:15:31.056 Feature Identifiers & Effects Log Page:May Support 00:15:31.056 NVMe-MI Commands & Effects Log Page: May Support 00:15:31.056 Data Area 4 for Telemetry Log: Not Supported 00:15:31.056 Error Log Page Entries Supported: 128 00:15:31.056 Keep Alive: Supported 00:15:31.056 Keep Alive Granularity: 10000 ms 00:15:31.056 00:15:31.056 NVM Command Set Attributes 
00:15:31.056 ========================== 00:15:31.056 Submission Queue Entry Size 00:15:31.056 Max: 64 00:15:31.056 Min: 64 00:15:31.056 Completion Queue Entry Size 00:15:31.056 Max: 16 00:15:31.056 Min: 16 00:15:31.056 Number of Namespaces: 32 00:15:31.056 Compare Command: Supported 00:15:31.056 Write Uncorrectable Command: Not Supported 00:15:31.056 Dataset Management Command: Supported 00:15:31.056 Write Zeroes Command: Supported 00:15:31.056 Set Features Save Field: Not Supported 00:15:31.056 Reservations: Not Supported 00:15:31.056 Timestamp: Not Supported 00:15:31.056 Copy: Supported 00:15:31.056 Volatile Write Cache: Present 00:15:31.056 Atomic Write Unit (Normal): 1 00:15:31.056 Atomic Write Unit (PFail): 1 00:15:31.056 Atomic Compare & Write Unit: 1 00:15:31.056 Fused Compare & Write: Supported 00:15:31.056 Scatter-Gather List 00:15:31.056 SGL Command Set: Supported (Dword aligned) 00:15:31.056 SGL Keyed: Not Supported 00:15:31.056 SGL Bit Bucket Descriptor: Not Supported 00:15:31.056 SGL Metadata Pointer: Not Supported 00:15:31.056 Oversized SGL: Not Supported 00:15:31.056 SGL Metadata Address: Not Supported 00:15:31.056 SGL Offset: Not Supported 00:15:31.056 Transport SGL Data Block: Not Supported 00:15:31.056 Replay Protected Memory Block: Not Supported 00:15:31.056 00:15:31.056 Firmware Slot Information 00:15:31.056 ========================= 00:15:31.056 Active slot: 1 00:15:31.056 Slot 1 Firmware Revision: 24.09 00:15:31.056 00:15:31.056 00:15:31.056 Commands Supported and Effects 00:15:31.056 ============================== 00:15:31.056 Admin Commands 00:15:31.056 -------------- 00:15:31.056 Get Log Page (02h): Supported 00:15:31.056 Identify (06h): Supported 00:15:31.056 Abort (08h): Supported 00:15:31.056 Set Features (09h): Supported 00:15:31.056 Get Features (0Ah): Supported 00:15:31.056 Asynchronous Event Request (0Ch): Supported 00:15:31.056 Keep Alive (18h): Supported 00:15:31.056 I/O Commands 00:15:31.056 ------------ 00:15:31.056 Flush (00h): Supported LBA-Change 00:15:31.056 Write (01h): Supported LBA-Change 00:15:31.056 Read (02h): Supported 00:15:31.056 Compare (05h): Supported 00:15:31.056 Write Zeroes (08h): Supported LBA-Change 00:15:31.056 Dataset Management (09h): Supported LBA-Change 00:15:31.056 Copy (19h): Supported LBA-Change 00:15:31.056 00:15:31.056 Error Log 00:15:31.056 ========= 00:15:31.056 00:15:31.056 Arbitration 00:15:31.056 =========== 00:15:31.056 Arbitration Burst: 1 00:15:31.056 00:15:31.056 Power Management 00:15:31.056 ================ 00:15:31.056 Number of Power States: 1 00:15:31.056 Current Power State: Power State #0 00:15:31.056 Power State #0: 00:15:31.056 Max Power: 0.00 W 00:15:31.056 Non-Operational State: Operational 00:15:31.056 Entry Latency: Not Reported 00:15:31.056 Exit Latency: Not Reported 00:15:31.057 Relative Read Throughput: 0 00:15:31.057 Relative Read Latency: 0 00:15:31.057 Relative Write Throughput: 0 00:15:31.057 Relative Write Latency: 0 00:15:31.057 Idle Power: Not Reported 00:15:31.057 Active Power: Not Reported 00:15:31.057 Non-Operational Permissive Mode: Not Supported 00:15:31.057 00:15:31.057 Health Information 00:15:31.057 ================== 00:15:31.057 Critical Warnings: 00:15:31.057 Available Spare Space: OK 00:15:31.057 Temperature: OK 00:15:31.057 Device Reliability: OK 00:15:31.057 Read Only: No 00:15:31.057 Volatile Memory Backup: OK 00:15:31.057 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:31.057 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:31.057 Available Spare: 0% 00:15:31.057 
[2024-07-13 00:40:42.361513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:31.057 [2024-07-13 00:40:42.361522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:31.057 [2024-07-13 00:40:42.361549] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:31.057 [2024-07-13 00:40:42.361558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.057 [2024-07-13 00:40:42.361563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.057 [2024-07-13 00:40:42.361568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.057 [2024-07-13 00:40:42.361574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.057 [2024-07-13 00:40:42.361713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:31.057 [2024-07-13 00:40:42.361723] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:31.057 [2024-07-13 00:40:42.362720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.057 [2024-07-13 00:40:42.362767] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:31.057 [2024-07-13 00:40:42.362773] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:31.057 [2024-07-13 00:40:42.363722] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:31.057 [2024-07-13 00:40:42.363731] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:31.057 [2024-07-13 00:40:42.363781] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:31.057 [2024-07-13 00:40:42.369231] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:31.057 Available Spare Threshold: 0% 00:15:31.057 Life Percentage Used: 0% 00:15:31.057 Data Units Read: 0 00:15:31.057 Data Units Written: 0 00:15:31.057 Host Read Commands: 0 00:15:31.057 Host Write Commands: 0 00:15:31.057 Controller Busy Time: 0 minutes 00:15:31.057 Power Cycles: 0 00:15:31.057 Power On Hours: 0 hours 00:15:31.057 Unsafe Shutdowns: 0 00:15:31.057 Unrecoverable Media Errors: 0 00:15:31.057 Lifetime Error Log Entries: 0 00:15:31.057 Warning Temperature Time: 0 minutes 00:15:31.057 Critical Temperature Time: 0 minutes 00:15:31.057 00:15:31.057 Number of Queues 00:15:31.057 ================ 00:15:31.057 Number of I/O Submission Queues: 127 00:15:31.057 Number of I/O Completion Queues: 127 00:15:31.057 00:15:31.057 Active Namespaces 00:15:31.057 ================= 00:15:31.057 Namespace ID:1 00:15:31.057 Error Recovery Timeout: Unlimited 00:15:31.057 Command
Set Identifier: NVM (00h) 00:15:31.057 Deallocate: Supported 00:15:31.057 Deallocated/Unwritten Error: Not Supported 00:15:31.057 Deallocated Read Value: Unknown 00:15:31.057 Deallocate in Write Zeroes: Not Supported 00:15:31.057 Deallocated Guard Field: 0xFFFF 00:15:31.057 Flush: Supported 00:15:31.057 Reservation: Supported 00:15:31.057 Namespace Sharing Capabilities: Multiple Controllers 00:15:31.057 Size (in LBAs): 131072 (0GiB) 00:15:31.057 Capacity (in LBAs): 131072 (0GiB) 00:15:31.057 Utilization (in LBAs): 131072 (0GiB) 00:15:31.057 NGUID: C1FC86D990EE4A929683BA6564AFD774 00:15:31.057 UUID: c1fc86d9-90ee-4a92-9683-ba6564afd774 00:15:31.057 Thin Provisioning: Not Supported 00:15:31.057 Per-NS Atomic Units: Yes 00:15:31.057 Atomic Boundary Size (Normal): 0 00:15:31.057 Atomic Boundary Size (PFail): 0 00:15:31.057 Atomic Boundary Offset: 0 00:15:31.057 Maximum Single Source Range Length: 65535 00:15:31.057 Maximum Copy Length: 65535 00:15:31.057 Maximum Source Range Count: 1 00:15:31.057 NGUID/EUI64 Never Reused: No 00:15:31.057 Namespace Write Protected: No 00:15:31.057 Number of LBA Formats: 1 00:15:31.057 Current LBA Format: LBA Format #00 00:15:31.057 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:31.057 00:15:31.057 00:40:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:31.057 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.057 [2024-07-13 00:40:42.594070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.320 Initializing NVMe Controllers 00:15:36.320 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.320 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:36.320 Initialization complete. Launching workers. 00:15:36.320 ======================================================== 00:15:36.320 Latency(us) 00:15:36.320 Device Information : IOPS MiB/s Average min max 00:15:36.320 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39967.60 156.12 3203.13 967.88 6631.83 00:15:36.320 ======================================================== 00:15:36.320 Total : 39967.60 156.12 3203.13 967.88 6631.83 00:15:36.320 00:15:36.320 [2024-07-13 00:40:47.615446] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.320 00:40:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:36.320 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.320 [2024-07-13 00:40:47.840496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.580 Initializing NVMe Controllers 00:15:41.580 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:41.580 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:41.580 Initialization complete. Launching workers. 
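The @84 read pass above and the @85 write pass now launching use the identical vfio-user transport string and differ only in the -w workload flag. Outside the harness, the two runs reduce to the following sketch (PERF is a shorthand introduced here for the spdk_nvme_perf path shown in the log; -s 256 and -g are carried over unchanged from the harness invocation):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # queue depth 128 (-q), 4096-byte I/O (-o), 5 s duration (-t), core mask 0x2 (-c)
  $PERF -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # write pass: identical except for the workload flag
  $PERF -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2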
00:15:41.580 ======================================================== 00:15:41.580 Latency(us) 00:15:41.580 Device Information : IOPS MiB/s Average min max 00:15:41.580 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.47 62.67 7978.17 6979.43 8049.94 00:15:41.580 ======================================================== 00:15:41.580 Total : 16042.47 62.67 7978.17 6979.43 8049.94 00:15:41.580 00:15:41.580 [2024-07-13 00:40:52.873956] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.580 00:40:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:41.580 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.580 [2024-07-13 00:40:53.069938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:46.842 [2024-07-13 00:40:58.138489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:46.842 Initializing NVMe Controllers 00:15:46.842 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:46.842 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:46.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:46.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:46.842 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:46.842 Initialization complete. Launching workers. 00:15:46.842 Starting thread on core 2 00:15:46.842 Starting thread on core 3 00:15:46.842 Starting thread on core 1 00:15:46.842 00:40:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:46.842 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.101 [2024-07-13 00:40:58.422630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.387 [2024-07-13 00:41:01.484799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.388 Initializing NVMe Controllers 00:15:50.388 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:50.388 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:50.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:50.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:50.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:50.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:50.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:50.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:50.388 Initialization complete. Launching workers. 
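The @86 reconnect example and the @87 arbitration run launching here accept the same -r transport string; arbitration also echoes its fully expanded configuration (the -q 64 -s 131072 ... -n 100000 -i -1 line above), so either run can be replayed by hand. A sketch, with EXAMPLES introduced here as a shorthand for the build/examples path in the log (-d 256 and -g are kept as the harness passed them):

  EXAMPLES=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
  # 50/50 random read/write (-w randrw -M 50), queue depth 32, 4 KiB I/O, cores 1-3 (-c 0xE)
  $EXAMPLES/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  # 3-second arbitration test (-t 3); the per-core IO/s lines below are its output
  $EXAMPLES/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g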
00:15:50.388 Starting thread on core 1 with urgent priority queue 00:15:50.388 Starting thread on core 2 with urgent priority queue 00:15:50.388 Starting thread on core 3 with urgent priority queue 00:15:50.388 Starting thread on core 0 with urgent priority queue 00:15:50.388 SPDK bdev Controller (SPDK1 ) core 0: 9366.33 IO/s 10.68 secs/100000 ios 00:15:50.388 SPDK bdev Controller (SPDK1 ) core 1: 8088.00 IO/s 12.36 secs/100000 ios 00:15:50.388 SPDK bdev Controller (SPDK1 ) core 2: 7742.67 IO/s 12.92 secs/100000 ios 00:15:50.388 SPDK bdev Controller (SPDK1 ) core 3: 9711.00 IO/s 10.30 secs/100000 ios 00:15:50.388 ======================================================== 00:15:50.388 00:15:50.388 00:41:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:50.388 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.388 [2024-07-13 00:41:01.758660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.388 Initializing NVMe Controllers 00:15:50.388 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:50.388 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:50.388 Namespace ID: 1 size: 0GB 00:15:50.388 Initialization complete. 00:15:50.388 INFO: using host memory buffer for IO 00:15:50.388 Hello world! 00:15:50.388 [2024-07-13 00:41:01.793915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.388 00:41:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:50.388 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.645 [2024-07-13 00:41:02.060599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.656 Initializing NVMe Controllers 00:15:51.656 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:51.656 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:51.656 Initialization complete. Launching workers. 
00:15:51.656 submit (in ns) avg, min, max = 7857.4, 3251.3, 4000920.9 00:15:51.656 complete (in ns) avg, min, max = 20318.4, 1795.7, 4995863.5 00:15:51.656 00:15:51.656 Submit histogram 00:15:51.656 ================ 00:15:51.656 Range in us Cumulative Count 00:15:51.656 3.242 - 3.256: 0.0061% ( 1) 00:15:51.656 3.256 - 3.270: 0.0121% ( 1) 00:15:51.656 3.270 - 3.283: 0.0486% ( 6) 00:15:51.656 3.283 - 3.297: 0.1881% ( 23) 00:15:51.656 3.297 - 3.311: 0.4430% ( 42) 00:15:51.656 3.311 - 3.325: 0.7950% ( 58) 00:15:51.656 3.325 - 3.339: 2.0453% ( 206) 00:15:51.656 3.339 - 3.353: 5.5532% ( 578) 00:15:51.656 3.353 - 3.367: 11.0578% ( 907) 00:15:51.656 3.367 - 3.381: 17.0419% ( 986) 00:15:51.656 3.381 - 3.395: 23.4812% ( 1061) 00:15:51.656 3.395 - 3.409: 29.7566% ( 1034) 00:15:51.656 3.409 - 3.423: 34.9578% ( 857) 00:15:51.656 3.423 - 3.437: 39.9891% ( 829) 00:15:51.656 3.437 - 3.450: 45.8518% ( 966) 00:15:51.656 3.450 - 3.464: 50.1608% ( 710) 00:15:51.656 3.464 - 3.478: 53.8690% ( 611) 00:15:51.656 3.478 - 3.492: 59.4344% ( 917) 00:15:51.656 3.492 - 3.506: 66.2924% ( 1130) 00:15:51.656 3.506 - 3.520: 70.9899% ( 774) 00:15:51.656 3.520 - 3.534: 75.3171% ( 713) 00:15:51.656 3.534 - 3.548: 79.9296% ( 760) 00:15:51.656 3.548 - 3.562: 83.1644% ( 533) 00:15:51.656 3.562 - 3.590: 86.3567% ( 526) 00:15:51.656 3.590 - 3.617: 87.4067% ( 173) 00:15:51.656 3.617 - 3.645: 88.1957% ( 130) 00:15:51.656 3.645 - 3.673: 89.7433% ( 255) 00:15:51.656 3.673 - 3.701: 91.7218% ( 326) 00:15:51.656 3.701 - 3.729: 93.3969% ( 276) 00:15:51.656 3.729 - 3.757: 95.2661% ( 308) 00:15:51.656 3.757 - 3.784: 96.8744% ( 265) 00:15:51.656 3.784 - 3.812: 97.9244% ( 173) 00:15:51.656 3.812 - 3.840: 98.7194% ( 131) 00:15:51.656 3.840 - 3.868: 99.2414% ( 86) 00:15:51.656 3.868 - 3.896: 99.4659% ( 37) 00:15:51.656 3.896 - 3.923: 99.5691% ( 17) 00:15:51.656 3.923 - 3.951: 99.6055% ( 6) 00:15:51.656 3.951 - 3.979: 99.6237% ( 3) 00:15:51.656 3.979 - 4.007: 99.6298% ( 1) 00:15:51.656 4.007 - 4.035: 99.6359% ( 1) 00:15:51.656 4.035 - 4.063: 99.6419% ( 1) 00:15:51.656 5.231 - 5.259: 99.6480% ( 1) 00:15:51.656 5.287 - 5.315: 99.6541% ( 1) 00:15:51.656 5.315 - 5.343: 99.6601% ( 1) 00:15:51.656 5.343 - 5.370: 99.6662% ( 1) 00:15:51.656 5.398 - 5.426: 99.6723% ( 1) 00:15:51.656 5.426 - 5.454: 99.6844% ( 2) 00:15:51.656 5.454 - 5.482: 99.6905% ( 1) 00:15:51.656 5.482 - 5.510: 99.7087% ( 3) 00:15:51.656 5.760 - 5.788: 99.7148% ( 1) 00:15:51.656 5.899 - 5.927: 99.7208% ( 1) 00:15:51.656 6.010 - 6.038: 99.7269% ( 1) 00:15:51.656 6.289 - 6.317: 99.7330% ( 1) 00:15:51.656 6.317 - 6.344: 99.7390% ( 1) 00:15:51.656 6.650 - 6.678: 99.7451% ( 1) 00:15:51.656 6.762 - 6.790: 99.7512% ( 1) 00:15:51.656 6.790 - 6.817: 99.7572% ( 1) 00:15:51.656 6.873 - 6.901: 99.7633% ( 1) 00:15:51.656 6.901 - 6.929: 99.7694% ( 1) 00:15:51.656 6.929 - 6.957: 99.7754% ( 1) 00:15:51.656 7.040 - 7.068: 99.7815% ( 1) 00:15:51.656 7.235 - 7.290: 99.7876% ( 1) 00:15:51.656 7.346 - 7.402: 99.7937% ( 1) 00:15:51.656 7.569 - 7.624: 99.7997% ( 1) 00:15:51.656 7.624 - 7.680: 99.8058% ( 1) 00:15:51.656 7.680 - 7.736: 99.8119% ( 1) 00:15:51.656 7.736 - 7.791: 99.8179% ( 1) 00:15:51.656 7.791 - 7.847: 99.8240% ( 1) 00:15:51.656 7.847 - 7.903: 99.8301% ( 1) 00:15:51.656 7.958 - 8.014: 99.8422% ( 2) 00:15:51.656 8.181 - 8.237: 99.8543% ( 2) 00:15:51.656 8.292 - 8.348: 99.8604% ( 1) 00:15:51.656 8.459 - 8.515: 99.8665% ( 1) 00:15:51.656 8.515 - 8.570: 99.8725% ( 1) 00:15:51.656 8.737 - 8.793: 99.8786% ( 1) 00:15:51.656 8.793 - 8.849: 99.8847% ( 1) 00:15:51.656 9.850 - 9.906: 99.8908% ( 1) 
00:15:51.656 3989.148 - 4017.642: 100.0000% ( 18) 00:15:51.656 00:15:51.656 Complete histogram 00:15:51.656 ================== 00:15:51.656 [2024-07-13 00:41:03.082594] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.656 Range in us Cumulative Count 00:15:51.656 1.795 - 1.809: 0.0121% ( 2) 00:15:51.656 1.809 - 1.823: 0.0668% ( 9) 00:15:51.656 1.823 - 1.837: 0.9407% ( 144) 00:15:51.656 1.837 - 1.850: 2.3184% ( 227) 00:15:51.656 1.850 - 1.864: 3.2409% ( 152) 00:15:51.656 1.864 - 1.878: 15.6218% ( 2040) 00:15:51.656 1.878 - 1.892: 73.3689% ( 9515) 00:15:51.656 1.892 - 1.906: 92.0677% ( 3081) 00:15:51.656 1.906 - 1.920: 95.0719% ( 495) 00:15:51.656 1.920 - 1.934: 96.0733% ( 165) 00:15:51.656 1.934 - 1.948: 96.5649% ( 81) 00:15:51.656 1.948 - 1.962: 97.8819% ( 217) 00:15:51.656 1.962 - 1.976: 98.9136% ( 170) 00:15:51.656 1.976 - 1.990: 99.2414% ( 54) 00:15:51.656 1.990 - 2.003: 99.2899% ( 8) 00:15:51.656 2.003 - 2.017: 99.3263% ( 6) 00:15:51.656 2.017 - 2.031: 99.3385% ( 2) 00:15:51.656 2.031 - 2.045: 99.3506% ( 2) 00:15:51.656 2.045 - 2.059: 99.3688% ( 3) 00:15:51.656 2.059 - 2.073: 99.3749% ( 1) 00:15:51.656 2.101 - 2.115: 99.3810% ( 1) 00:15:51.656 2.129 - 2.143: 99.3870% ( 1) 00:15:51.656 2.296 - 2.310: 99.3931% ( 1) 00:15:51.656 2.393 - 2.407: 99.3992% ( 1) 00:15:51.656 3.729 - 3.757: 99.4052% ( 1) 00:15:51.656 4.007 - 4.035: 99.4113% ( 1) 00:15:51.656 4.035 - 4.063: 99.4234% ( 2) 00:15:51.656 4.090 - 4.118: 99.4295% ( 1) 00:15:51.656 4.118 - 4.146: 99.4356% ( 1) 00:15:51.656 4.842 - 4.870: 99.4416% ( 1) 00:15:51.656 4.870 - 4.897: 99.4477% ( 1) 00:15:51.656 4.897 - 4.925: 99.4538% ( 1) 00:15:51.656 5.009 - 5.037: 99.4599% ( 1) 00:15:51.656 5.064 - 5.092: 99.4659% ( 1) 00:15:51.656 5.120 - 5.148: 99.4720% ( 1) 00:15:51.656 5.259 - 5.287: 99.4841% ( 2) 00:15:51.656 5.677 - 5.704: 99.4902% ( 1) 00:15:51.656 6.038 - 6.066: 99.4963% ( 1) 00:15:51.656 6.261 - 6.289: 99.5023% ( 1) 00:15:51.656 6.372 - 6.400: 99.5084% ( 1) 00:15:51.656 6.483 - 6.511: 99.5145% ( 1) 00:15:51.656 6.734 - 6.762: 99.5205% ( 1) 00:15:51.656 7.235 - 7.290: 99.5266% ( 1) 00:15:51.656 7.346 - 7.402: 99.5327% ( 1) 00:15:51.656 7.791 - 7.847: 99.5388% ( 1) 00:15:51.656 3006.108 - 3020.355: 99.5448% ( 1) 00:15:51.656 3989.148 - 4017.642: 99.9939% ( 74) 00:15:51.656 4986.435 - 5014.929: 100.0000% ( 1) 00:15:51.656 00:15:51.656 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:51.914 [ 00:15:51.914 { 00:15:51.914 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:51.914 "subtype": "Discovery", 00:15:51.914 "listen_addresses": [], 00:15:51.914 "allow_any_host": true, 00:15:51.914 "hosts": [] 00:15:51.914 }, 00:15:51.914 { 00:15:51.914 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:51.914 "subtype": "NVMe", 00:15:51.914 "listen_addresses": [ 00:15:51.914 { 00:15:51.914 "trtype": "VFIOUSER", 00:15:51.914 "adrfam": "IPv4", 00:15:51.914 "traddr":
"/var/run/vfio-user/domain/vfio-user1/1", 00:15:51.914 "trsvcid": "0" 00:15:51.914 } 00:15:51.914 ], 00:15:51.914 "allow_any_host": true, 00:15:51.914 "hosts": [], 00:15:51.914 "serial_number": "SPDK1", 00:15:51.914 "model_number": "SPDK bdev Controller", 00:15:51.914 "max_namespaces": 32, 00:15:51.914 "min_cntlid": 1, 00:15:51.914 "max_cntlid": 65519, 00:15:51.914 "namespaces": [ 00:15:51.914 { 00:15:51.914 "nsid": 1, 00:15:51.914 "bdev_name": "Malloc1", 00:15:51.914 "name": "Malloc1", 00:15:51.914 "nguid": "C1FC86D990EE4A929683BA6564AFD774", 00:15:51.914 "uuid": "c1fc86d9-90ee-4a92-9683-ba6564afd774" 00:15:51.914 } 00:15:51.914 ] 00:15:51.914 }, 00:15:51.914 { 00:15:51.914 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:51.914 "subtype": "NVMe", 00:15:51.914 "listen_addresses": [ 00:15:51.914 { 00:15:51.914 "trtype": "VFIOUSER", 00:15:51.914 "adrfam": "IPv4", 00:15:51.914 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:51.914 "trsvcid": "0" 00:15:51.914 } 00:15:51.914 ], 00:15:51.914 "allow_any_host": true, 00:15:51.914 "hosts": [], 00:15:51.914 "serial_number": "SPDK2", 00:15:51.914 "model_number": "SPDK bdev Controller", 00:15:51.914 "max_namespaces": 32, 00:15:51.914 "min_cntlid": 1, 00:15:51.914 "max_cntlid": 65519, 00:15:51.914 "namespaces": [ 00:15:51.914 { 00:15:51.914 "nsid": 1, 00:15:51.914 "bdev_name": "Malloc2", 00:15:51.914 "name": "Malloc2", 00:15:51.914 "nguid": "E170F87D5A5947C28A4AD1C0CFAC2D84", 00:15:51.914 "uuid": "e170f87d-5a59-47c2-8a4a-d1c0cfac2d84" 00:15:51.914 } 00:15:51.914 ] 00:15:51.914 } 00:15:51.914 ] 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1340343 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:51.914 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:51.914 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.914 [2024-07-13 00:41:03.454635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.172 Malloc3 00:15:52.172 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:52.172 [2024-07-13 00:41:03.667238] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.172 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:52.172 Asynchronous Event Request test 00:15:52.172 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.172 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.172 Registering asynchronous event callbacks... 00:15:52.172 Starting namespace attribute notice tests for all controllers... 00:15:52.172 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:52.172 aer_cb - Changed Namespace 00:15:52.172 Cleaning up... 00:15:52.431 [ 00:15:52.432 { 00:15:52.432 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:52.432 "subtype": "Discovery", 00:15:52.432 "listen_addresses": [], 00:15:52.432 "allow_any_host": true, 00:15:52.432 "hosts": [] 00:15:52.432 }, 00:15:52.432 { 00:15:52.432 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:52.432 "subtype": "NVMe", 00:15:52.432 "listen_addresses": [ 00:15:52.432 { 00:15:52.432 "trtype": "VFIOUSER", 00:15:52.432 "adrfam": "IPv4", 00:15:52.432 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:52.432 "trsvcid": "0" 00:15:52.432 } 00:15:52.432 ], 00:15:52.432 "allow_any_host": true, 00:15:52.432 "hosts": [], 00:15:52.432 "serial_number": "SPDK1", 00:15:52.432 "model_number": "SPDK bdev Controller", 00:15:52.432 "max_namespaces": 32, 00:15:52.432 "min_cntlid": 1, 00:15:52.432 "max_cntlid": 65519, 00:15:52.432 "namespaces": [ 00:15:52.432 { 00:15:52.432 "nsid": 1, 00:15:52.432 "bdev_name": "Malloc1", 00:15:52.432 "name": "Malloc1", 00:15:52.432 "nguid": "C1FC86D990EE4A929683BA6564AFD774", 00:15:52.432 "uuid": "c1fc86d9-90ee-4a92-9683-ba6564afd774" 00:15:52.432 }, 00:15:52.432 { 00:15:52.432 "nsid": 2, 00:15:52.432 "bdev_name": "Malloc3", 00:15:52.432 "name": "Malloc3", 00:15:52.432 "nguid": "1B31916B7AD94791AC2E26428B223ECC", 00:15:52.432 "uuid": "1b31916b-7ad9-4791-ac2e-26428b223ecc" 00:15:52.432 } 00:15:52.432 ] 00:15:52.432 }, 00:15:52.432 { 00:15:52.432 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:52.432 "subtype": "NVMe", 00:15:52.432 "listen_addresses": [ 00:15:52.432 { 00:15:52.432 "trtype": "VFIOUSER", 00:15:52.432 "adrfam": "IPv4", 00:15:52.432 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:52.432 "trsvcid": "0" 00:15:52.432 } 00:15:52.432 ], 00:15:52.432 "allow_any_host": true, 00:15:52.432 "hosts": [], 00:15:52.432 "serial_number": "SPDK2", 00:15:52.432 "model_number": "SPDK bdev Controller", 00:15:52.432 
"max_namespaces": 32, 00:15:52.432 "min_cntlid": 1, 00:15:52.432 "max_cntlid": 65519, 00:15:52.432 "namespaces": [ 00:15:52.432 { 00:15:52.432 "nsid": 1, 00:15:52.432 "bdev_name": "Malloc2", 00:15:52.432 "name": "Malloc2", 00:15:52.432 "nguid": "E170F87D5A5947C28A4AD1C0CFAC2D84", 00:15:52.432 "uuid": "e170f87d-5a59-47c2-8a4a-d1c0cfac2d84" 00:15:52.432 } 00:15:52.432 ] 00:15:52.432 } 00:15:52.432 ] 00:15:52.432 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1340343 00:15:52.432 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:52.432 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:52.432 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:52.432 00:41:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:52.432 [2024-07-13 00:41:03.901044] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:52.432 [2024-07-13 00:41:03.901090] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340358 ] 00:15:52.432 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.432 [2024-07-13 00:41:03.931612] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:52.432 [2024-07-13 00:41:03.942122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:52.432 [2024-07-13 00:41:03.942145] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9a33095000 00:15:52.432 [2024-07-13 00:41:03.943125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.944128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.945134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.946143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.947145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.948151] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.949159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.950166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:52.432 [2024-07-13 00:41:03.951173] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:52.432 [2024-07-13 00:41:03.951183] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9a31e5b000 00:15:52.432 [2024-07-13 00:41:03.952123] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:52.432 [2024-07-13 00:41:03.960656] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:52.432 [2024-07-13 00:41:03.960683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:52.432 [2024-07-13 00:41:03.965771] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:52.432 [2024-07-13 00:41:03.965805] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:52.432 [2024-07-13 00:41:03.965871] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:52.432 [2024-07-13 00:41:03.965887] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:52.432 [2024-07-13 00:41:03.965891] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:52.432 [2024-07-13 00:41:03.966776] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:52.432 [2024-07-13 00:41:03.966785] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:52.432 [2024-07-13 00:41:03.966791] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:52.432 [2024-07-13 00:41:03.967781] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:52.432 [2024-07-13 00:41:03.967789] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:52.432 [2024-07-13 00:41:03.967796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:52.432 [2024-07-13 00:41:03.968786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:52.432 [2024-07-13 00:41:03.968794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:52.432 [2024-07-13 00:41:03.969796] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:52.432 [2024-07-13 00:41:03.969805] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:52.432 [2024-07-13 00:41:03.969809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:52.432 [2024-07-13 00:41:03.969814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:52.432 [2024-07-13 00:41:03.969919] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:52.432 [2024-07-13 00:41:03.969923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:52.432 [2024-07-13 00:41:03.969927] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:52.432 [2024-07-13 00:41:03.970805] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:52.432 [2024-07-13 00:41:03.971814] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:52.432 [2024-07-13 00:41:03.972817] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:52.432 [2024-07-13 00:41:03.973826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:52.432 [2024-07-13 00:41:03.973863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:52.432 [2024-07-13 00:41:03.974839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:52.432 [2024-07-13 00:41:03.974847] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:52.432 [2024-07-13 00:41:03.974851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:52.432 [2024-07-13 00:41:03.974868] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:52.432 [2024-07-13 00:41:03.974876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:52.432 [2024-07-13 00:41:03.974887] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:52.432 [2024-07-13 00:41:03.974891] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:52.432 [2024-07-13 00:41:03.974902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:52.432 [2024-07-13 00:41:03.985234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:52.432 [2024-07-13 00:41:03.985245] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:52.432 [2024-07-13 00:41:03.985253] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:52.433 [2024-07-13 00:41:03.985260] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:52.433 [2024-07-13 00:41:03.985264] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:52.433 [2024-07-13 00:41:03.985267] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:52.433 [2024-07-13 00:41:03.985271] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:52.433 [2024-07-13 00:41:03.985275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:52.433 [2024-07-13 00:41:03.985283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:52.433 [2024-07-13 00:41:03.985293] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:52.692 [2024-07-13 00:41:03.993231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:52.692 [2024-07-13 00:41:03.993244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.692 [2024-07-13 00:41:03.993252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.692 [2024-07-13 00:41:03.993259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.692 [2024-07-13 00:41:03.993266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.692 [2024-07-13 00:41:03.993271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:52.692 [2024-07-13 00:41:03.993278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:52.692 [2024-07-13 00:41:03.993287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:52.692 [2024-07-13 00:41:04.001229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:52.692 [2024-07-13 00:41:04.001236] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:52.692 [2024-07-13 00:41:04.001241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:52.692 [2024-07-13 00:41:04.001247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:52.692 [2024-07-13 00:41:04.001252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:52.692 [2024-07-13 00:41:04.001260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:52.692 [2024-07-13 00:41:04.009232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:52.692 [2024-07-13 00:41:04.009282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:52.692 [2024-07-13 00:41:04.009290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:52.692 [2024-07-13 00:41:04.009296] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:52.692 [2024-07-13 00:41:04.009302] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:52.692 [2024-07-13 00:41:04.009308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.017229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.017238] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:52.693 [2024-07-13 00:41:04.017249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.017256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.017262] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:52.693 [2024-07-13 00:41:04.017266] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:52.693 [2024-07-13 00:41:04.017271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.025229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.025242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.025249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.025255] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:52.693 [2024-07-13 00:41:04.025259] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:52.693 [2024-07-13 00:41:04.025265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.032256] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.032265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.032271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.032279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.032284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.032288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.032293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.032297] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:52.693 [2024-07-13 00:41:04.032301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:52.693 [2024-07-13 00:41:04.032306] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:52.693 [2024-07-13 00:41:04.032323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.041232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.041252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.049230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.049241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.057228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.057240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.065230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.065247] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:52.693 [2024-07-13 00:41:04.065251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:52.693 [2024-07-13 00:41:04.065255] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:15:52.693 [2024-07-13 00:41:04.065258] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:52.693 [2024-07-13 00:41:04.065264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:52.693 [2024-07-13 00:41:04.065270] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:52.693 [2024-07-13 00:41:04.065274] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:52.693 [2024-07-13 00:41:04.065279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.065285] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:52.693 [2024-07-13 00:41:04.065289] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:52.693 [2024-07-13 00:41:04.065294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.065300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:52.693 [2024-07-13 00:41:04.065304] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:52.693 [2024-07-13 00:41:04.065310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:52.693 [2024-07-13 00:41:04.073231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.073244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.073253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:52.693 [2024-07-13 00:41:04.073259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:52.693 ===================================================== 00:15:52.693 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:52.693 ===================================================== 00:15:52.693 Controller Capabilities/Features 00:15:52.693 ================================ 00:15:52.693 Vendor ID: 4e58 00:15:52.693 Subsystem Vendor ID: 4e58 00:15:52.693 Serial Number: SPDK2 00:15:52.693 Model Number: SPDK bdev Controller 00:15:52.693 Firmware Version: 24.09 00:15:52.693 Recommended Arb Burst: 6 00:15:52.693 IEEE OUI Identifier: 8d 6b 50 00:15:52.693 Multi-path I/O 00:15:52.693 May have multiple subsystem ports: Yes 00:15:52.693 May have multiple controllers: Yes 00:15:52.693 Associated with SR-IOV VF: No 00:15:52.693 Max Data Transfer Size: 131072 00:15:52.693 Max Number of Namespaces: 32 00:15:52.693 Max Number of I/O Queues: 127 00:15:52.693 NVMe Specification Version (VS): 1.3 00:15:52.693 NVMe Specification Version (Identify): 1.3 00:15:52.693 Maximum Queue Entries: 256 00:15:52.693 Contiguous Queues Required: Yes 00:15:52.693 Arbitration Mechanisms 
Supported 00:15:52.693 Weighted Round Robin: Not Supported 00:15:52.693 Vendor Specific: Not Supported 00:15:52.693 Reset Timeout: 15000 ms 00:15:52.693 Doorbell Stride: 4 bytes 00:15:52.693 NVM Subsystem Reset: Not Supported 00:15:52.693 Command Sets Supported 00:15:52.693 NVM Command Set: Supported 00:15:52.693 Boot Partition: Not Supported 00:15:52.693 Memory Page Size Minimum: 4096 bytes 00:15:52.693 Memory Page Size Maximum: 4096 bytes 00:15:52.693 Persistent Memory Region: Not Supported 00:15:52.693 Optional Asynchronous Events Supported 00:15:52.693 Namespace Attribute Notices: Supported 00:15:52.693 Firmware Activation Notices: Not Supported 00:15:52.693 ANA Change Notices: Not Supported 00:15:52.693 PLE Aggregate Log Change Notices: Not Supported 00:15:52.693 LBA Status Info Alert Notices: Not Supported 00:15:52.693 EGE Aggregate Log Change Notices: Not Supported 00:15:52.693 Normal NVM Subsystem Shutdown event: Not Supported 00:15:52.693 Zone Descriptor Change Notices: Not Supported 00:15:52.693 Discovery Log Change Notices: Not Supported 00:15:52.693 Controller Attributes 00:15:52.693 128-bit Host Identifier: Supported 00:15:52.693 Non-Operational Permissive Mode: Not Supported 00:15:52.693 NVM Sets: Not Supported 00:15:52.693 Read Recovery Levels: Not Supported 00:15:52.693 Endurance Groups: Not Supported 00:15:52.693 Predictable Latency Mode: Not Supported 00:15:52.693 Traffic Based Keep ALive: Not Supported 00:15:52.693 Namespace Granularity: Not Supported 00:15:52.693 SQ Associations: Not Supported 00:15:52.693 UUID List: Not Supported 00:15:52.693 Multi-Domain Subsystem: Not Supported 00:15:52.693 Fixed Capacity Management: Not Supported 00:15:52.693 Variable Capacity Management: Not Supported 00:15:52.693 Delete Endurance Group: Not Supported 00:15:52.693 Delete NVM Set: Not Supported 00:15:52.693 Extended LBA Formats Supported: Not Supported 00:15:52.693 Flexible Data Placement Supported: Not Supported 00:15:52.693 00:15:52.693 Controller Memory Buffer Support 00:15:52.693 ================================ 00:15:52.693 Supported: No 00:15:52.693 00:15:52.693 Persistent Memory Region Support 00:15:52.693 ================================ 00:15:52.693 Supported: No 00:15:52.693 00:15:52.693 Admin Command Set Attributes 00:15:52.693 ============================ 00:15:52.693 Security Send/Receive: Not Supported 00:15:52.693 Format NVM: Not Supported 00:15:52.693 Firmware Activate/Download: Not Supported 00:15:52.693 Namespace Management: Not Supported 00:15:52.693 Device Self-Test: Not Supported 00:15:52.693 Directives: Not Supported 00:15:52.693 NVMe-MI: Not Supported 00:15:52.693 Virtualization Management: Not Supported 00:15:52.694 Doorbell Buffer Config: Not Supported 00:15:52.694 Get LBA Status Capability: Not Supported 00:15:52.694 Command & Feature Lockdown Capability: Not Supported 00:15:52.694 Abort Command Limit: 4 00:15:52.694 Async Event Request Limit: 4 00:15:52.694 Number of Firmware Slots: N/A 00:15:52.694 Firmware Slot 1 Read-Only: N/A 00:15:52.694 Firmware Activation Without Reset: N/A 00:15:52.694 Multiple Update Detection Support: N/A 00:15:52.694 Firmware Update Granularity: No Information Provided 00:15:52.694 Per-Namespace SMART Log: No 00:15:52.694 Asymmetric Namespace Access Log Page: Not Supported 00:15:52.694 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:52.694 Command Effects Log Page: Supported 00:15:52.694 Get Log Page Extended Data: Supported 00:15:52.694 Telemetry Log Pages: Not Supported 00:15:52.694 Persistent Event Log Pages: Not Supported 
00:15:52.694 Supported Log Pages Log Page: May Support 00:15:52.694 Commands Supported & Effects Log Page: Not Supported 00:15:52.694 Feature Identifiers & Effects Log Page:May Support 00:15:52.694 NVMe-MI Commands & Effects Log Page: May Support 00:15:52.694 Data Area 4 for Telemetry Log: Not Supported 00:15:52.694 Error Log Page Entries Supported: 128 00:15:52.694 Keep Alive: Supported 00:15:52.694 Keep Alive Granularity: 10000 ms 00:15:52.694 00:15:52.694 NVM Command Set Attributes 00:15:52.694 ========================== 00:15:52.694 Submission Queue Entry Size 00:15:52.694 Max: 64 00:15:52.694 Min: 64 00:15:52.694 Completion Queue Entry Size 00:15:52.694 Max: 16 00:15:52.694 Min: 16 00:15:52.694 Number of Namespaces: 32 00:15:52.694 Compare Command: Supported 00:15:52.694 Write Uncorrectable Command: Not Supported 00:15:52.694 Dataset Management Command: Supported 00:15:52.694 Write Zeroes Command: Supported 00:15:52.694 Set Features Save Field: Not Supported 00:15:52.694 Reservations: Not Supported 00:15:52.694 Timestamp: Not Supported 00:15:52.694 Copy: Supported 00:15:52.694 Volatile Write Cache: Present 00:15:52.694 Atomic Write Unit (Normal): 1 00:15:52.694 Atomic Write Unit (PFail): 1 00:15:52.694 Atomic Compare & Write Unit: 1 00:15:52.694 Fused Compare & Write: Supported 00:15:52.694 Scatter-Gather List 00:15:52.694 SGL Command Set: Supported (Dword aligned) 00:15:52.694 SGL Keyed: Not Supported 00:15:52.694 SGL Bit Bucket Descriptor: Not Supported 00:15:52.694 SGL Metadata Pointer: Not Supported 00:15:52.694 Oversized SGL: Not Supported 00:15:52.694 SGL Metadata Address: Not Supported 00:15:52.694 SGL Offset: Not Supported 00:15:52.694 Transport SGL Data Block: Not Supported 00:15:52.694 Replay Protected Memory Block: Not Supported 00:15:52.694 00:15:52.694 Firmware Slot Information 00:15:52.694 ========================= 00:15:52.694 Active slot: 1 00:15:52.694 Slot 1 Firmware Revision: 24.09 00:15:52.694 00:15:52.694 00:15:52.694 Commands Supported and Effects 00:15:52.694 ============================== 00:15:52.694 Admin Commands 00:15:52.694 -------------- 00:15:52.694 Get Log Page (02h): Supported 00:15:52.694 Identify (06h): Supported 00:15:52.694 Abort (08h): Supported 00:15:52.694 Set Features (09h): Supported 00:15:52.694 Get Features (0Ah): Supported 00:15:52.694 Asynchronous Event Request (0Ch): Supported 00:15:52.694 Keep Alive (18h): Supported 00:15:52.694 I/O Commands 00:15:52.694 ------------ 00:15:52.694 Flush (00h): Supported LBA-Change 00:15:52.694 Write (01h): Supported LBA-Change 00:15:52.694 Read (02h): Supported 00:15:52.694 Compare (05h): Supported 00:15:52.694 Write Zeroes (08h): Supported LBA-Change 00:15:52.694 Dataset Management (09h): Supported LBA-Change 00:15:52.694 Copy (19h): Supported LBA-Change 00:15:52.694 00:15:52.694 Error Log 00:15:52.694 ========= 00:15:52.694 00:15:52.694 Arbitration 00:15:52.694 =========== 00:15:52.694 Arbitration Burst: 1 00:15:52.694 00:15:52.694 Power Management 00:15:52.694 ================ 00:15:52.694 Number of Power States: 1 00:15:52.694 Current Power State: Power State #0 00:15:52.694 Power State #0: 00:15:52.694 Max Power: 0.00 W 00:15:52.694 Non-Operational State: Operational 00:15:52.694 Entry Latency: Not Reported 00:15:52.694 Exit Latency: Not Reported 00:15:52.694 Relative Read Throughput: 0 00:15:52.694 Relative Read Latency: 0 00:15:52.694 Relative Write Throughput: 0 00:15:52.694 Relative Write Latency: 0 00:15:52.694 Idle Power: Not Reported 00:15:52.694 Active Power: Not Reported 00:15:52.694 
Non-Operational Permissive Mode: Not Supported 00:15:52.694 00:15:52.694 Health Information 00:15:52.694 ================== 00:15:52.694 Critical Warnings: 00:15:52.694 Available Spare Space: OK 00:15:52.694 Temperature: OK 00:15:52.694 Device Reliability: OK 00:15:52.694 Read Only: No 00:15:52.694 Volatile Memory Backup: OK 00:15:52.694 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:52.694 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:52.694 Available Spare: 0% 00:15:52.694 [2024-07-13 00:41:04.073345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:52.694 [2024-07-13 00:41:04.081232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:52.694 [2024-07-13 00:41:04.081264] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:52.694 [2024-07-13 00:41:04.081273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.694 [2024-07-13 00:41:04.081279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.694 [2024-07-13 00:41:04.081284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.694 [2024-07-13 00:41:04.081290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.694 [2024-07-13 00:41:04.081337] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:52.694 [2024-07-13 00:41:04.081347] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:52.694 [2024-07-13 00:41:04.082341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.694 [2024-07-13 00:41:04.082382] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:52.694 [2024-07-13 00:41:04.082388] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:52.694 [2024-07-13 00:41:04.083348] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:52.694 [2024-07-13 00:41:04.083358] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:52.694 [2024-07-13 00:41:04.083403] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:52.694 [2024-07-13 00:41:04.084378] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:52.694 Available Spare Threshold: 0% 00:15:52.694 Life Percentage Used: 0% 00:15:52.694 Data Units Read: 0 00:15:52.694 Data Units Written: 0 00:15:52.694 Host Read Commands: 0 00:15:52.694 Host Write Commands: 0 00:15:52.694 Controller Busy Time: 0 minutes 00:15:52.694 Power Cycles: 0 00:15:52.694 Power On Hours: 0 hours 00:15:52.694 Unsafe Shutdowns: 0 00:15:52.694 Unrecoverable Media 
Errors: 0 00:15:52.694 Lifetime Error Log Entries: 0 00:15:52.694 Warning Temperature Time: 0 minutes 00:15:52.694 Critical Temperature Time: 0 minutes 00:15:52.694 00:15:52.694 Number of Queues 00:15:52.694 ================ 00:15:52.694 Number of I/O Submission Queues: 127 00:15:52.694 Number of I/O Completion Queues: 127 00:15:52.694 00:15:52.694 Active Namespaces 00:15:52.694 ================= 00:15:52.694 Namespace ID:1 00:15:52.694 Error Recovery Timeout: Unlimited 00:15:52.694 Command Set Identifier: NVM (00h) 00:15:52.694 Deallocate: Supported 00:15:52.694 Deallocated/Unwritten Error: Not Supported 00:15:52.694 Deallocated Read Value: Unknown 00:15:52.694 Deallocate in Write Zeroes: Not Supported 00:15:52.694 Deallocated Guard Field: 0xFFFF 00:15:52.694 Flush: Supported 00:15:52.694 Reservation: Supported 00:15:52.694 Namespace Sharing Capabilities: Multiple Controllers 00:15:52.694 Size (in LBAs): 131072 (0GiB) 00:15:52.694 Capacity (in LBAs): 131072 (0GiB) 00:15:52.694 Utilization (in LBAs): 131072 (0GiB) 00:15:52.694 NGUID: E170F87D5A5947C28A4AD1C0CFAC2D84 00:15:52.694 UUID: e170f87d-5a59-47c2-8a4a-d1c0cfac2d84 00:15:52.694 Thin Provisioning: Not Supported 00:15:52.694 Per-NS Atomic Units: Yes 00:15:52.694 Atomic Boundary Size (Normal): 0 00:15:52.694 Atomic Boundary Size (PFail): 0 00:15:52.694 Atomic Boundary Offset: 0 00:15:52.694 Maximum Single Source Range Length: 65535 00:15:52.694 Maximum Copy Length: 65535 00:15:52.694 Maximum Source Range Count: 1 00:15:52.694 NGUID/EUI64 Never Reused: No 00:15:52.694 Namespace Write Protected: No 00:15:52.694 Number of LBA Formats: 1 00:15:52.694 Current LBA Format: LBA Format #00 00:15:52.694 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:52.694 00:15:52.694 00:41:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:52.694 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.952 [2024-07-13 00:41:04.288555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.214 Initializing NVMe Controllers 00:15:58.214 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.214 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:58.214 Initialization complete. Launching workers. 
00:15:58.214 ======================================================== 00:15:58.214 Latency(us) 00:15:58.214 Device Information : IOPS MiB/s Average min max 00:15:58.214 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39934.84 156.00 3205.05 954.89 6743.00 00:15:58.214 ======================================================== 00:15:58.214 Total : 39934.84 156.00 3205.05 954.89 6743.00 00:15:58.214 00:15:58.214 [2024-07-13 00:41:09.395472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.214 00:41:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:58.214 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.214 [2024-07-13 00:41:09.610095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.484 Initializing NVMe Controllers 00:16:03.484 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:03.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:03.484 Initialization complete. Launching workers. 00:16:03.484 ======================================================== 00:16:03.484 Latency(us) 00:16:03.484 Device Information : IOPS MiB/s Average min max 00:16:03.484 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.97 156.01 3204.80 957.17 7598.27 00:16:03.484 ======================================================== 00:16:03.484 Total : 39937.97 156.01 3204.80 957.17 7598.27 00:16:03.484 00:16:03.484 [2024-07-13 00:41:14.632327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:03.484 00:41:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:03.484 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.484 [2024-07-13 00:41:14.828731] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:08.748 [2024-07-13 00:41:19.974319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:08.748 Initializing NVMe Controllers 00:16:08.748 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:08.748 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:08.748 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:08.748 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:08.748 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:08.748 Initialization complete. Launching workers. 
00:16:08.748 Starting thread on core 2 00:16:08.748 Starting thread on core 3 00:16:08.748 Starting thread on core 1 00:16:08.748 00:41:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:08.748 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.748 [2024-07-13 00:41:20.255613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.059 [2024-07-13 00:41:23.343111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.059 Initializing NVMe Controllers 00:16:12.059 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.059 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.059 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:12.059 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:12.059 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:12.059 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:12.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:12.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:12.059 Initialization complete. Launching workers. 00:16:12.059 Starting thread on core 1 with urgent priority queue 00:16:12.059 Starting thread on core 2 with urgent priority queue 00:16:12.059 Starting thread on core 3 with urgent priority queue 00:16:12.059 Starting thread on core 0 with urgent priority queue 00:16:12.059 SPDK bdev Controller (SPDK2 ) core 0: 8450.00 IO/s 11.83 secs/100000 ios 00:16:12.059 SPDK bdev Controller (SPDK2 ) core 1: 7177.00 IO/s 13.93 secs/100000 ios 00:16:12.059 SPDK bdev Controller (SPDK2 ) core 2: 7146.67 IO/s 13.99 secs/100000 ios 00:16:12.059 SPDK bdev Controller (SPDK2 ) core 3: 10566.33 IO/s 9.46 secs/100000 ios 00:16:12.059 ======================================================== 00:16:12.059 00:16:12.059 00:41:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:12.059 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.059 [2024-07-13 00:41:23.607073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.059 Initializing NVMe Controllers 00:16:12.059 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.059 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.059 Namespace ID: 1 size: 0GB 00:16:12.059 Initialization complete. 00:16:12.059 INFO: using host memory buffer for IO 00:16:12.059 Hello world! 
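
The four tools exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) all reach the target the same way: instead of a PCIe address, the -r option takes a transport ID string in which trtype selects the vfio-user transport, traddr names the directory holding the controller socket, and subnqn picks the subsystem. A minimal sketch of the pattern, reusing the socket path and NQN from this run (the perf flags mirror the invocation logged above: -q 128 queue depth, -o 4096-byte I/Os, -w read workload, -t 5 seconds, -c 0x2 core mask):

    # transport ID shared by every consumer of this controller
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 4 KiB reads for 5 seconds at queue depth 128, pinned by the core mask
    ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # the example apps accept the same -r string
    ./build/examples/hello_world -d 256 -g -r "$TRID"
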
00:16:12.059 [2024-07-13 00:41:23.617140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.317 00:41:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:12.317 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.574 [2024-07-13 00:41:23.884170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:13.508 Initializing NVMe Controllers 00:16:13.508 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:13.508 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:13.508 Initialization complete. Launching workers. 00:16:13.508 submit (in ns) avg, min, max = 6125.5, 3199.1, 4001757.4 00:16:13.508 complete (in ns) avg, min, max = 19323.5, 1759.1, 4008616.5 00:16:13.508 00:16:13.508 Submit histogram 00:16:13.508 ================ 00:16:13.508 Range in us Cumulative Count 00:16:13.508 3.186 - 3.200: 0.0060% ( 1) 00:16:13.508 3.200 - 3.214: 0.0120% ( 1) 00:16:13.508 3.214 - 3.228: 0.0179% ( 1) 00:16:13.508 3.228 - 3.242: 0.0299% ( 2) 00:16:13.508 3.242 - 3.256: 0.0718% ( 7) 00:16:13.508 3.256 - 3.270: 0.1915% ( 20) 00:16:13.508 3.270 - 3.283: 0.4667% ( 46) 00:16:13.508 3.283 - 3.297: 0.7359% ( 45) 00:16:13.508 3.297 - 3.311: 1.0769% ( 57) 00:16:13.508 3.311 - 3.325: 1.5675% ( 82) 00:16:13.508 3.325 - 3.339: 3.0693% ( 251) 00:16:13.508 3.339 - 3.353: 6.9463% ( 648) 00:16:13.508 3.353 - 3.367: 12.3848% ( 909) 00:16:13.508 3.367 - 3.381: 18.2422% ( 979) 00:16:13.508 3.381 - 3.395: 24.6979% ( 1079) 00:16:13.508 3.395 - 3.409: 30.5074% ( 971) 00:16:13.508 3.409 - 3.423: 35.3596% ( 811) 00:16:13.508 3.423 - 3.437: 40.0562% ( 785) 00:16:13.508 3.437 - 3.450: 45.4589% ( 903) 00:16:13.508 3.450 - 3.464: 49.5393% ( 682) 00:16:13.508 3.464 - 3.478: 52.9556% ( 571) 00:16:13.508 3.478 - 3.492: 58.2745% ( 889) 00:16:13.508 3.492 - 3.506: 65.4062% ( 1192) 00:16:13.508 3.506 - 3.520: 69.8756% ( 747) 00:16:13.508 3.520 - 3.534: 74.4226% ( 760) 00:16:13.508 3.534 - 3.548: 79.5740% ( 861) 00:16:13.508 3.548 - 3.562: 83.2655% ( 617) 00:16:13.508 3.562 - 3.590: 86.6280% ( 562) 00:16:13.508 3.590 - 3.617: 87.6212% ( 166) 00:16:13.508 3.617 - 3.645: 88.7579% ( 190) 00:16:13.508 3.645 - 3.673: 90.2537% ( 250) 00:16:13.508 3.673 - 3.701: 91.9768% ( 288) 00:16:13.508 3.701 - 3.729: 93.5503% ( 263) 00:16:13.508 3.729 - 3.757: 95.2256% ( 280) 00:16:13.508 3.757 - 3.784: 96.6854% ( 244) 00:16:13.508 3.784 - 3.812: 97.9538% ( 212) 00:16:13.508 3.812 - 3.840: 98.6837% ( 122) 00:16:13.508 3.840 - 3.868: 99.2102% ( 88) 00:16:13.508 3.868 - 3.896: 99.4436% ( 39) 00:16:13.508 3.896 - 3.923: 99.5991% ( 26) 00:16:13.508 3.923 - 3.951: 99.6410% ( 7) 00:16:13.508 4.007 - 4.035: 99.6470% ( 1) 00:16:13.508 5.203 - 5.231: 99.6530% ( 1) 00:16:13.508 5.816 - 5.843: 99.6590% ( 1) 00:16:13.508 6.094 - 6.122: 99.6650% ( 1) 00:16:13.508 6.122 - 6.150: 99.6709% ( 1) 00:16:13.508 6.372 - 6.400: 99.6769% ( 1) 00:16:13.508 6.511 - 6.539: 99.6829% ( 1) 00:16:13.508 6.539 - 6.567: 99.6889% ( 1) 00:16:13.508 6.623 - 6.650: 99.6949% ( 1) 00:16:13.508 6.734 - 6.762: 99.7008% ( 1) 00:16:13.508 6.790 - 6.817: 99.7068% ( 1) 00:16:13.508 6.817 - 6.845: 99.7128% ( 1) 00:16:13.508 6.873 - 6.901: 99.7188% ( 1) 00:16:13.508 6.984 - 7.012: 99.7308% ( 2) 00:16:13.508 7.040 - 7.068: 99.7427% ( 2) 00:16:13.508 7.068 - 
7.096: 99.7547% ( 2) 00:16:13.508 7.096 - 7.123: 99.7607% ( 1) 00:16:13.508 7.123 - 7.179: 99.7726% ( 2) 00:16:13.508 7.179 - 7.235: 99.8026% ( 5) 00:16:13.508 7.346 - 7.402: 99.8085% ( 1) 00:16:13.508 7.402 - 7.457: 99.8145% ( 1) 00:16:13.508 7.457 - 7.513: 99.8205% ( 1) 00:16:13.508 7.569 - 7.624: 99.8325% ( 2) 00:16:13.508 7.680 - 7.736: 99.8385% ( 1) 00:16:13.508 7.736 - 7.791: 99.8504% ( 2) 00:16:13.508 7.791 - 7.847: 99.8564% ( 1) 00:16:13.508 7.847 - 7.903: 99.8624% ( 1) 00:16:13.508 7.903 - 7.958: 99.8684% ( 1) 00:16:13.508 7.958 - 8.014: 99.8744% ( 1) 00:16:13.508 8.125 - 8.181: 99.8863% ( 2) 00:16:13.508 8.237 - 8.292: 99.8923% ( 1) 00:16:13.508 8.459 - 8.515: 99.8983% ( 1) 00:16:13.508 8.570 - 8.626: 99.9043% ( 1) 00:16:13.508 8.626 - 8.682: 99.9103% ( 1) 00:16:13.508 8.737 - 8.793: 99.9162% ( 1) 00:16:13.508 13.690 - 13.746: 99.9222% ( 1) 00:16:13.508 13.802 - 13.857: 99.9282% ( 1) 00:16:13.508 19.478 - 19.590: 99.9342% ( 1) 00:16:13.508 [2024-07-13 00:41:24.980310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:13.508 3989.148 - 4017.642: 100.0000% ( 11) 00:16:13.508 00:16:13.508 Complete histogram 00:16:13.508 ================== 00:16:13.508 Range in us Cumulative Count 00:16:13.508 1.753 - 1.760: 0.0239% ( 4) 00:16:13.508 1.760 - 1.767: 0.1017% ( 13) 00:16:13.508 1.767 - 1.774: 0.3829% ( 47) 00:16:13.508 1.774 - 1.781: 0.7299% ( 58) 00:16:13.508 1.781 - 1.795: 0.9752% ( 41) 00:16:13.508 1.795 - 1.809: 1.2983% ( 54) 00:16:13.508 1.809 - 1.823: 6.9223% ( 940) 00:16:13.508 1.823 - 1.837: 24.8115% ( 2990) 00:16:13.508 1.837 - 1.850: 30.2860% ( 915) 00:16:13.508 1.850 - 1.864: 39.2725% ( 1502) 00:16:13.508 1.864 - 1.878: 79.8911% ( 6789) 00:16:13.508 1.878 - 1.892: 93.4965% ( 2274) 00:16:13.508 1.892 - 1.906: 96.1051% ( 436) 00:16:13.508 1.906 - 1.920: 97.4213% ( 220) 00:16:13.508 1.920 - 1.934: 97.9718% ( 92) 00:16:13.508 1.934 - 1.948: 98.5222% ( 92) 00:16:13.508 1.948 - 1.962: 98.9829% ( 77) 00:16:13.508 1.962 - 1.976: 99.1265% ( 24) 00:16:13.508 1.976 - 1.990: 99.1863% ( 10) 00:16:13.508 1.990 - 2.003: 99.2102% ( 4) 00:16:13.508 2.003 - 2.017: 99.2222% ( 2) 00:16:13.508 2.017 - 2.031: 99.2701% ( 8) 00:16:13.508 2.031 - 2.045: 99.2820% ( 2) 00:16:13.508 2.045 - 2.059: 99.2940% ( 2) 00:16:13.508 2.059 - 2.073: 99.3060% ( 2) 00:16:13.508 2.170 - 2.184: 99.3120% ( 1) 00:16:13.508 2.268 - 2.282: 99.3179% ( 1) 00:16:13.508 2.296 - 2.310: 99.3239% ( 1) 00:16:13.508 2.310 - 2.323: 99.3359% ( 2) 00:16:13.508 2.323 - 2.337: 99.3419% ( 1) 00:16:13.509 2.351 - 2.365: 99.3479% ( 1) 00:16:13.509 2.671 - 2.685: 99.3538% ( 1) 00:16:13.509 3.979 - 4.007: 99.3598% ( 1) 00:16:13.509 4.257 - 4.285: 99.3658% ( 1) 00:16:13.509 4.341 - 4.369: 99.3718% ( 1) 00:16:13.509 4.452 - 4.480: 99.3778% ( 1) 00:16:13.509 4.591 - 4.619: 99.3838% ( 1) 00:16:13.509 4.619 - 4.647: 99.3897% ( 1) 00:16:13.509 4.703 - 4.730: 99.3957% ( 1) 00:16:13.509 4.730 - 4.758: 99.4077% ( 2) 00:16:13.509 4.758 - 4.786: 99.4137% ( 1) 00:16:13.509 4.870 - 4.897: 99.4196% ( 1) 00:16:13.509 4.897 - 4.925: 99.4316% ( 2) 00:16:13.509 5.398 - 5.426: 99.4376% ( 1) 00:16:13.509 5.537 - 5.565: 99.4436% ( 1) 00:16:13.509 5.565 - 5.593: 99.4496% ( 1) 00:16:13.509 5.704 - 5.732: 99.4555% ( 1) 00:16:13.509 5.816 - 5.843: 99.4615% ( 1) 00:16:13.509 5.871 - 5.899: 99.4675% ( 1) 00:16:13.509 5.983 - 6.010: 99.4735% ( 1) 00:16:13.509 6.038 - 6.066: 99.4795% ( 1) 00:16:13.509 6.122 - 6.150: 99.4855% ( 1) 00:16:13.509 6.344 - 6.372: 99.4914% ( 1) 00:16:13.509 6.372 - 6.400: 99.4974% ( 
1) 00:16:13.509 6.762 - 6.790: 99.5034% ( 1) 00:16:13.509 6.957 - 6.984: 99.5094% ( 1) 00:16:13.509 7.040 - 7.068: 99.5154% ( 1) 00:16:13.509 7.346 - 7.402: 99.5214% ( 1) 00:16:13.509 7.457 - 7.513: 99.5273% ( 1) 00:16:13.509 7.736 - 7.791: 99.5333% ( 1) 00:16:13.509 8.292 - 8.348: 99.5393% ( 1) 00:16:13.509 8.737 - 8.793: 99.5453% ( 1) 00:16:13.509 12.299 - 12.355: 99.5513% ( 1) 00:16:13.509 12.355 - 12.410: 99.5573% ( 1) 00:16:13.509 12.410 - 12.466: 99.5632% ( 1) 00:16:13.509 3989.148 - 4017.642: 100.0000% ( 73) 00:16:13.509 00:16:13.509 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:13.509 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:13.509 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:13.509 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:13.509 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:13.767 [ 00:16:13.767 { 00:16:13.767 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:13.767 "subtype": "Discovery", 00:16:13.767 "listen_addresses": [], 00:16:13.767 "allow_any_host": true, 00:16:13.767 "hosts": [] 00:16:13.767 }, 00:16:13.767 { 00:16:13.767 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:13.767 "subtype": "NVMe", 00:16:13.767 "listen_addresses": [ 00:16:13.767 { 00:16:13.767 "trtype": "VFIOUSER", 00:16:13.767 "adrfam": "IPv4", 00:16:13.767 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:13.767 "trsvcid": "0" 00:16:13.767 } 00:16:13.767 ], 00:16:13.767 "allow_any_host": true, 00:16:13.767 "hosts": [], 00:16:13.767 "serial_number": "SPDK1", 00:16:13.767 "model_number": "SPDK bdev Controller", 00:16:13.767 "max_namespaces": 32, 00:16:13.767 "min_cntlid": 1, 00:16:13.767 "max_cntlid": 65519, 00:16:13.767 "namespaces": [ 00:16:13.767 { 00:16:13.767 "nsid": 1, 00:16:13.767 "bdev_name": "Malloc1", 00:16:13.767 "name": "Malloc1", 00:16:13.767 "nguid": "C1FC86D990EE4A929683BA6564AFD774", 00:16:13.767 "uuid": "c1fc86d9-90ee-4a92-9683-ba6564afd774" 00:16:13.767 }, 00:16:13.767 { 00:16:13.767 "nsid": 2, 00:16:13.767 "bdev_name": "Malloc3", 00:16:13.767 "name": "Malloc3", 00:16:13.767 "nguid": "1B31916B7AD94791AC2E26428B223ECC", 00:16:13.767 "uuid": "1b31916b-7ad9-4791-ac2e-26428b223ecc" 00:16:13.767 } 00:16:13.767 ] 00:16:13.767 }, 00:16:13.767 { 00:16:13.767 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:13.767 "subtype": "NVMe", 00:16:13.767 "listen_addresses": [ 00:16:13.767 { 00:16:13.767 "trtype": "VFIOUSER", 00:16:13.767 "adrfam": "IPv4", 00:16:13.767 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:13.767 "trsvcid": "0" 00:16:13.767 } 00:16:13.767 ], 00:16:13.767 "allow_any_host": true, 00:16:13.767 "hosts": [], 00:16:13.767 "serial_number": "SPDK2", 00:16:13.767 "model_number": "SPDK bdev Controller", 00:16:13.767 "max_namespaces": 32, 00:16:13.767 "min_cntlid": 1, 00:16:13.767 "max_cntlid": 65519, 00:16:13.767 "namespaces": [ 00:16:13.767 { 00:16:13.767 "nsid": 1, 00:16:13.767 "bdev_name": "Malloc2", 00:16:13.767 "name": "Malloc2", 00:16:13.767 "nguid": "E170F87D5A5947C28A4AD1C0CFAC2D84", 00:16:13.767 "uuid": "e170f87d-5a59-47c2-8a4a-d1c0cfac2d84" 00:16:13.767 } 00:16:13.767 ] 00:16:13.767 } 00:16:13.767 ] 00:16:13.767 00:41:25 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1343810 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:13.767 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:13.767 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.025 [2024-07-13 00:41:25.333618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.025 Malloc4 00:16:14.025 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:14.025 [2024-07-13 00:41:25.575509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:14.283 Asynchronous Event Request test 00:16:14.283 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.283 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.283 Registering asynchronous event callbacks... 00:16:14.283 Starting namespace attribute notice tests for all controllers... 00:16:14.283 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:14.283 aer_cb - Changed Namespace 00:16:14.283 Cleaning up... 
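
The "Changed Namespace" event above is provoked deliberately: while the aer binary waits on the touch file, the script hot-adds a second namespace to the live subsystem. A sketch of the trigger sequence, using the same RPCs that appear in the trace (full rpc.py paths shortened to scripts/rpc.py):

    # back the new namespace with a 64 MiB, 512-byte-block malloc bdev
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    # attach it to the running subsystem as NSID 2; hosts that enabled
    # namespace-attribute notices receive the async event and re-read the list
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # verify: the JSON dump below lists Malloc4 as nsid 2 under cnode2
    scripts/rpc.py nvmf_get_subsystems
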
00:16:14.283 [ 00:16:14.283 { 00:16:14.283 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:14.283 "subtype": "Discovery", 00:16:14.283 "listen_addresses": [], 00:16:14.283 "allow_any_host": true, 00:16:14.283 "hosts": [] 00:16:14.283 }, 00:16:14.283 { 00:16:14.283 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:14.283 "subtype": "NVMe", 00:16:14.283 "listen_addresses": [ 00:16:14.283 { 00:16:14.283 "trtype": "VFIOUSER", 00:16:14.283 "adrfam": "IPv4", 00:16:14.283 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:14.283 "trsvcid": "0" 00:16:14.283 } 00:16:14.283 ], 00:16:14.283 "allow_any_host": true, 00:16:14.283 "hosts": [], 00:16:14.283 "serial_number": "SPDK1", 00:16:14.283 "model_number": "SPDK bdev Controller", 00:16:14.283 "max_namespaces": 32, 00:16:14.283 "min_cntlid": 1, 00:16:14.283 "max_cntlid": 65519, 00:16:14.283 "namespaces": [ 00:16:14.283 { 00:16:14.283 "nsid": 1, 00:16:14.283 "bdev_name": "Malloc1", 00:16:14.283 "name": "Malloc1", 00:16:14.283 "nguid": "C1FC86D990EE4A929683BA6564AFD774", 00:16:14.283 "uuid": "c1fc86d9-90ee-4a92-9683-ba6564afd774" 00:16:14.283 }, 00:16:14.283 { 00:16:14.283 "nsid": 2, 00:16:14.283 "bdev_name": "Malloc3", 00:16:14.283 "name": "Malloc3", 00:16:14.283 "nguid": "1B31916B7AD94791AC2E26428B223ECC", 00:16:14.283 "uuid": "1b31916b-7ad9-4791-ac2e-26428b223ecc" 00:16:14.283 } 00:16:14.283 ] 00:16:14.283 }, 00:16:14.283 { 00:16:14.283 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:14.283 "subtype": "NVMe", 00:16:14.283 "listen_addresses": [ 00:16:14.283 { 00:16:14.283 "trtype": "VFIOUSER", 00:16:14.283 "adrfam": "IPv4", 00:16:14.283 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:14.283 "trsvcid": "0" 00:16:14.283 } 00:16:14.283 ], 00:16:14.283 "allow_any_host": true, 00:16:14.283 "hosts": [], 00:16:14.283 "serial_number": "SPDK2", 00:16:14.283 "model_number": "SPDK bdev Controller", 00:16:14.283 "max_namespaces": 32, 00:16:14.283 "min_cntlid": 1, 00:16:14.283 "max_cntlid": 65519, 00:16:14.283 "namespaces": [ 00:16:14.283 { 00:16:14.283 "nsid": 1, 00:16:14.283 "bdev_name": "Malloc2", 00:16:14.283 "name": "Malloc2", 00:16:14.283 "nguid": "E170F87D5A5947C28A4AD1C0CFAC2D84", 00:16:14.283 "uuid": "e170f87d-5a59-47c2-8a4a-d1c0cfac2d84" 00:16:14.283 }, 00:16:14.283 { 00:16:14.283 "nsid": 2, 00:16:14.283 "bdev_name": "Malloc4", 00:16:14.283 "name": "Malloc4", 00:16:14.283 "nguid": "AB766DEAAD8D4693B957835F3FB21EF5", 00:16:14.283 "uuid": "ab766dea-ad8d-4693-b957-835f3fb21ef5" 00:16:14.283 } 00:16:14.283 ] 00:16:14.283 } 00:16:14.283 ] 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1343810 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1336221 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1336221 ']' 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1336221 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1336221 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1336221' 00:16:14.283 killing process with pid 1336221 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1336221 00:16:14.283 00:41:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1336221 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1344042 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1344042' 00:16:14.542 Process pid: 1344042 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1344042 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1344042 ']' 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.542 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:14.801 [2024-07-13 00:41:26.142486] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:14.801 [2024-07-13 00:41:26.143332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:14.801 [2024-07-13 00:41:26.143371] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.801 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.801 [2024-07-13 00:41:26.212038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.801 [2024-07-13 00:41:26.249398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.801 [2024-07-13 00:41:26.249439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:14.801 [2024-07-13 00:41:26.249446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.801 [2024-07-13 00:41:26.249451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.801 [2024-07-13 00:41:26.249455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.801 [2024-07-13 00:41:26.249570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.801 [2024-07-13 00:41:26.249693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.801 [2024-07-13 00:41:26.249803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.801 [2024-07-13 00:41:26.249804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.801 [2024-07-13 00:41:26.323203] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:14.801 [2024-07-13 00:41:26.323612] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:14.801 [2024-07-13 00:41:26.323920] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:14.801 [2024-07-13 00:41:26.324339] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:14.801 [2024-07-13 00:41:26.324873] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:15.735 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.735 00:41:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:15.735 00:41:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:16.668 00:41:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:16.668 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:16.668 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:16.668 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:16.668 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:16.668 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:16.926 Malloc1 00:16:16.926 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:17.184 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:17.184 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:17.441 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:17.441 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:17.441 00:41:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:17.699 Malloc2 00:16:17.700 00:41:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:17.700 00:41:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:17.971 00:41:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1344042 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1344042 ']' 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1344042 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1344042 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1344042' 00:16:18.287 killing process with pid 1344042 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1344042 00:16:18.287 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1344042 00:16:18.547 00:41:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:18.547 00:41:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:18.547 00:16:18.547 real 0m50.719s 00:16:18.547 user 3m20.775s 00:16:18.547 sys 0m3.587s 00:16:18.547 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:18.547 00:41:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:18.547 ************************************ 00:16:18.547 END TEST nvmf_vfio_user 00:16:18.547 ************************************ 00:16:18.547 00:41:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:18.547 00:41:29 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:18.547 00:41:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:18.547 00:41:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.547 00:41:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:18.547 ************************************ 00:16:18.547 START 
TEST nvmf_vfio_user_nvme_compliance 00:16:18.547 ************************************ 00:16:18.547 00:41:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:18.547 * Looking for test storage... 00:16:18.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:18.547 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1344802 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1344802' 00:16:18.548 Process pid: 1344802 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1344802 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1344802 ']' 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.548 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:18.807 [2024-07-13 00:41:30.106805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:18.807 [2024-07-13 00:41:30.106857] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.807 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.807 [2024-07-13 00:41:30.171908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.807 [2024-07-13 00:41:30.212089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.807 [2024-07-13 00:41:30.212131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.807 [2024-07-13 00:41:30.212138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.807 [2024-07-13 00:41:30.212148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.807 [2024-07-13 00:41:30.212153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:18.807 [2024-07-13 00:41:30.212207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.807 [2024-07-13 00:41:30.212338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.807 [2024-07-13 00:41:30.212339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.807 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.807 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:18.807 00:41:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.183 malloc0 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.183 00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.183 
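
At this point the compliance target is fully assembled. The rpc_cmd calls in the trace are the test framework's wrapper around scripts/rpc.py, so the equivalent direct sequence is, as a sketch:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    # -a: allow any host, -s: serial number, -m 32: cap namespaces at 32
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    # traddr is the socket directory created with mkdir -p above; vfio-user
    # has no port concept, so a zero service ID is passed
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance binary launched next attaches through the matching transport ID (trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0) and drives the admin and I/O queue test cases whose results follow.
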
00:41:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:20.183 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.183 00:16:20.183 00:16:20.183 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.183 http://cunit.sourceforge.net/ 00:16:20.183 00:16:20.183 00:16:20.183 Suite: nvme_compliance 00:16:20.183 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-13 00:41:31.527169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.183 [2024-07-13 00:41:31.528507] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:20.183 [2024-07-13 00:41:31.528524] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:20.183 [2024-07-13 00:41:31.528530] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:20.183 [2024-07-13 00:41:31.530190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.183 passed 00:16:20.183 Test: admin_identify_ctrlr_verify_fused ...[2024-07-13 00:41:31.607713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.183 [2024-07-13 00:41:31.610736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.183 passed 00:16:20.183 Test: admin_identify_ns ...[2024-07-13 00:41:31.688165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.442 [2024-07-13 00:41:31.747235] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:20.442 [2024-07-13 00:41:31.755232] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:20.442 [2024-07-13 00:41:31.776326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.442 passed 00:16:20.442 Test: admin_get_features_mandatory_features ...[2024-07-13 00:41:31.854247] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.442 [2024-07-13 00:41:31.860281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.442 passed 00:16:20.442 Test: admin_get_features_optional_features ...[2024-07-13 00:41:31.938757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.442 [2024-07-13 00:41:31.941780] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.442 passed 00:16:20.702 Test: admin_set_features_number_of_queues ...[2024-07-13 00:41:32.016779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.702 [2024-07-13 00:41:32.121308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.702 passed 00:16:20.702 Test: admin_get_log_page_mandatory_logs ...[2024-07-13 00:41:32.198251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.702 [2024-07-13 00:41:32.201291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.702 passed 00:16:20.961 Test: admin_get_log_page_with_lpo ...[2024-07-13 00:41:32.279323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.961 [2024-07-13 00:41:32.349243] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:20.961 [2024-07-13 00:41:32.362303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.961 passed 00:16:20.961 Test: fabric_property_get ...[2024-07-13 00:41:32.436431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.961 [2024-07-13 00:41:32.440459] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:20.961 [2024-07-13 00:41:32.442470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.961 passed 00:16:21.220 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-13 00:41:32.520996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.220 [2024-07-13 00:41:32.522232] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:21.220 [2024-07-13 00:41:32.524021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.220 passed 00:16:21.220 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-13 00:41:32.601927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.220 [2024-07-13 00:41:32.686234] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:21.220 [2024-07-13 00:41:32.702242] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:21.220 [2024-07-13 00:41:32.707320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.220 passed 00:16:21.488 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-13 00:41:32.782480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.488 [2024-07-13 00:41:32.783713] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:21.488 [2024-07-13 00:41:32.785500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.488 passed 00:16:21.488 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-13 00:41:32.863399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.488 [2024-07-13 00:41:32.939238] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:21.488 [2024-07-13 00:41:32.963233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:21.488 [2024-07-13 00:41:32.968314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.488 passed 00:16:21.488 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-13 00:41:33.045483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.745 [2024-07-13 00:41:33.046716] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:21.745 [2024-07-13 00:41:33.046742] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:21.745 [2024-07-13 00:41:33.048505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.745 passed 00:16:21.745 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-13 00:41:33.126370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.745 [2024-07-13 00:41:33.217257] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:21.745 [2024-07-13 00:41:33.227253] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:21.745 [2024-07-13 00:41:33.235234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:21.745 [2024-07-13 00:41:33.243230] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:21.745 [2024-07-13 00:41:33.272318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.745 passed 00:16:22.003 Test: admin_create_io_sq_verify_pc ...[2024-07-13 00:41:33.350259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.003 [2024-07-13 00:41:33.368239] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:22.003 [2024-07-13 00:41:33.385471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.003 passed 00:16:22.003 Test: admin_create_io_qp_max_qps ...[2024-07-13 00:41:33.464990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.380 [2024-07-13 00:41:34.573233] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:23.639 [2024-07-13 00:41:34.948017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.639 passed 00:16:23.639 Test: admin_create_io_sq_shared_cq ...[2024-07-13 00:41:35.025094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.639 [2024-07-13 00:41:35.156231] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:23.639 [2024-07-13 00:41:35.193288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.898 passed 00:16:23.898 00:16:23.898 Run Summary: Type Total Ran Passed Failed Inactive 00:16:23.898 suites 1 1 n/a 0 0 00:16:23.898 tests 18 18 18 0 0 00:16:23.898 asserts 360 360 360 0 n/a 00:16:23.898 00:16:23.898 Elapsed time = 1.510 seconds 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1344802 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1344802 ']' 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1344802 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1344802 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1344802' 00:16:23.898 killing process with pid 1344802 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1344802 00:16:23.898 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1344802 00:16:24.157 00:41:35 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:24.157 00:16:24.157 real 0m5.541s 00:16:24.157 user 0m15.673s 00:16:24.157 sys 0m0.447s 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:24.157 ************************************ 00:16:24.157 END TEST nvmf_vfio_user_nvme_compliance 00:16:24.157 ************************************ 00:16:24.157 00:41:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.157 00:41:35 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:24.157 00:41:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.157 00:41:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.157 00:41:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.157 ************************************ 00:16:24.157 START TEST nvmf_vfio_user_fuzz 00:16:24.157 ************************************ 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:24.157 * Looking for test storage... 00:16:24.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.157 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.158 00:41:35 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1345779 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1345779' 00:16:24.158 Process pid: 1345779 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1345779 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1345779 ']' 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
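With the target process up on /var/tmp/spdk.sock, the trace that follows provisions it over JSON-RPC. Condensed into SPDK's stock scripts/rpc.py client (paths relative to the spdk checkout; a sketch of the same calls, not the script's own rpc_cmd plumbing), the sequence is:

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER        # vfio-user transport
  mkdir -p /var/run/vfio-user                               # socket directory for the listener
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MiB RAM disk, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The fuzzer is then aimed at that listener via the transport ID string 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' for a 30-second run (-t 30), seeded with -S 123456 so a failing run can be replayed.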
00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.158 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:24.417 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.417 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:24.417 00:41:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.791 malloc0 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.791 00:41:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.792 00:41:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.792 00:41:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:25.792 00:41:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.792 00:41:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.792 00:41:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.792 00:41:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:25.792 00:41:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:57.853 Fuzzing completed. 
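As a back-of-envelope check on the counters dumped below (the run lasted about 30 seconds, per -t 30 and the timestamps), the completion totals translate to roughly:

  # bash arithmetic over the dumped totals; values copied from the dump below
  echo "io rate:    $(( 1100617 / 30 )) cmd/s"      # ~36687
  echo "admin rate: $(( 270440 / 30 )) cmd/s"       # ~9014
  echo "io success: $(( 4330 * 10000 / 1100617 ))"  # 39 -> ~0.39% of fuzzed I/O commands completed

A success fraction well under 1% is the expected shape for randomized negative testing: almost every generated command should be rejected, and the interesting outcome is that the target survives all of them.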
Shutting down the fuzz application 00:16:57.854 00:16:57.854 Dumping successful admin opcodes: 00:16:57.854 8, 9, 10, 24, 00:16:57.854 Dumping successful io opcodes: 00:16:57.854 0, 00:16:57.854 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1100617, total successful commands: 4330, random_seed: 231828096 00:16:57.854 NS: 0x200003a1ef00 admin qp, Total commands completed: 270440, total successful commands: 2178, random_seed: 2381893760 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1345779 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1345779 ']' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1345779 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1345779 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1345779' 00:16:57.854 killing process with pid 1345779 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1345779 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1345779 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:57.854 00:16:57.854 real 0m32.157s 00:16:57.854 user 0m34.507s 00:16:57.854 sys 0m26.003s 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.854 00:42:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:57.854 ************************************ 00:16:57.854 END TEST nvmf_vfio_user_fuzz 00:16:57.854 ************************************ 00:16:57.854 00:42:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:57.854 00:42:07 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:57.854 00:42:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:57.854 00:42:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.854 00:42:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:57.854 ************************************ 
00:16:57.854 START TEST nvmf_host_management 00:16:57.854 ************************************ 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:57.854 * Looking for test storage... 00:16:57.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.854 
00:42:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:57.854 00:42:07 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.854 00:42:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:02.045 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:02.045 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:02.045 Found net devices under 0000:86:00.0: cvl_0_0 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:02.045 Found net devices under 0000:86:00.1: cvl_0_1 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.045 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.046 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.046 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.046 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:17:02.306 00:17:02.306 --- 10.0.0.2 ping statistics --- 00:17:02.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.306 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:17:02.306 00:17:02.306 --- 10.0.0.1 ping statistics --- 00:17:02.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.306 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1354578 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1354578 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1354578 ']' 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:02.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.306 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.306 [2024-07-13 00:42:13.732891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:02.306 [2024-07-13 00:42:13.732936] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.306 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.306 [2024-07-13 00:42:13.805662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.306 [2024-07-13 00:42:13.848367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.306 [2024-07-13 00:42:13.848406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.306 [2024-07-13 00:42:13.848413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.306 [2024-07-13 00:42:13.848419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.306 [2024-07-13 00:42:13.848424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.306 [2024-07-13 00:42:13.848553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.306 [2024-07-13 00:42:13.848615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.306 [2024-07-13 00:42:13.848699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.306 [2024-07-13 00:42:13.848700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.566 [2024-07-13 00:42:13.995362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.566 00:42:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.566 00:42:14 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.566 Malloc0 00:17:02.566 [2024-07-13 00:42:14.055080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1354630 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1354630 /var/tmp/bdevperf.sock 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1354630 ']' 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
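The gen_nvmf_target_json helper, traced above and below, feeds bdevperf its controller config through /dev/fd/63. A minimal standalone equivalent is sketched here, assuming SPDK's usual subsystems/bdev envelope around the single entry the trace prints (the envelope itself is not shown in the log), with the resolved values that appear further down:

  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  # same flags as the traced run: 64-deep queue, 64 KiB I/O, verify workload, 10 s
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10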
00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.566 { 00:17:02.566 "params": { 00:17:02.566 "name": "Nvme$subsystem", 00:17:02.566 "trtype": "$TEST_TRANSPORT", 00:17:02.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.566 "adrfam": "ipv4", 00:17:02.566 "trsvcid": "$NVMF_PORT", 00:17:02.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.566 "hdgst": ${hdgst:-false}, 00:17:02.566 "ddgst": ${ddgst:-false} 00:17:02.566 }, 00:17:02.566 "method": "bdev_nvme_attach_controller" 00:17:02.566 } 00:17:02.566 EOF 00:17:02.566 )") 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:02.566 00:42:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.566 "params": { 00:17:02.566 "name": "Nvme0", 00:17:02.566 "trtype": "tcp", 00:17:02.566 "traddr": "10.0.0.2", 00:17:02.566 "adrfam": "ipv4", 00:17:02.567 "trsvcid": "4420", 00:17:02.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:02.567 "hdgst": false, 00:17:02.567 "ddgst": false 00:17:02.567 }, 00:17:02.567 "method": "bdev_nvme_attach_controller" 00:17:02.567 }' 00:17:02.826 [2024-07-13 00:42:14.144987] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:02.826 [2024-07-13 00:42:14.145034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354630 ] 00:17:02.826 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.826 [2024-07-13 00:42:14.212443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.826 [2024-07-13 00:42:14.252376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.085 Running I/O for 10 seconds... 
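Once "Running I/O for 10 seconds..." appears, the waitforio helper traced below gates the rest of the test on I/O actually flowing. Stripped of the helper plumbing, and using rpc.py directly in place of the script's rpc_cmd wrapper, the loop amounts to:

  # Poll bdevperf's per-bdev stats until Nvme0n1 shows >=100 completed reads,
  # retrying up to 10 times at 0.25 s intervals (thresholds as in the trace:
  # the first poll below reads 78 ops, the second 707, which ends the wait).
  for ((i = 10; i != 0; i--)); do
    reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
      jq -r '.bdevs[0].num_read_ops')
    if [ "$reads" -ge 100 ]; then
      break
    fi
    sleep 0.25
  done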
00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:17:03.085 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0
00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.346 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:17:03.346 [2024-07-13 00:42:14.809518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1698390 is same with the state(5) to be set
00:17:03.346-348 [2024-07-13 00:42:14.809688 - 00:42:14.810682] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: all 64 outstanding commands on qid:1 aborted after host removal, 35 WRITEs (cid:29-63, lba:102016-106368) and 29 READs (cid:0-28, lba:98304-101888), len:128 each, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.348 [2024-07-13 00:42:14.810743] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18507f0 was disconnected and freed. reset controller.
00:17:03.349 [2024-07-13 00:42:14.811661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:17:03.349 task offset: 102016 on job bdev=Nvme0n1 fails
00:17:03.349
00:17:03.349 Latency(us)
00:17:03.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:03.349 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.349 Job: Nvme0n1 ended in about 0.41 seconds with error
00:17:03.349 Verification LBA range: start 0x0 length 0x400
00:17:03.349 Nvme0n1 : 0.41 1873.94 117.12 156.16 0.00 30681.91 1431.82 27924.03
00:17:03.349 ===================================================================================================================
00:17:03.349 Total : 1873.94 117.12 156.16 0.00 30681.91 1431.82 27924.03
00:17:03.349 [2024-07-13 00:42:14.813297] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:03.349 [2024-07-13 00:42:14.813314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143f2d0 (9): Bad file descriptor
00:17:03.349 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:17:03.349 [2024-07-13 00:42:14.817276] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:17:03.349 [2024-07-13 00:42:14.817413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:17:03.349 [2024-07-13 00:42:14.817438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.349 [2024-07-13 00:42:14.817455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:17:03.349 [2024-07-13 00:42:14.817462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:17:03.349 [2024-07-13 00:42:14.817470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:17:03.349 [2024-07-13 00:42:14.817476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x143f2d0
00:17:03.349 [2024-07-13 00:42:14.817502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143f2d0 (9): Bad file descriptor
00:17:03.349 [2024-07-13 00:42:14.817520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:17:03.349 [2024-07-13 00:42:14.817528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:17:03.349 [2024-07-13 00:42:14.817535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:17:03.349 [2024-07-13 00:42:14.817548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
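What the trace above demonstrates: removing a host NQN from a subsystem's allow list force-disconnects that host's qpairs, aborting queued I/O with SQ DELETION status, and the initiator's automatic reconnect is then refused at FABRIC CONNECT with "does not allow host" (COMMAND SPECIFIC 01/84, i.e. sct 1, sc 132). A minimal sketch of the same flow, assuming an SPDK target already serving nqn.2016-06.io.spdk:cnode0 with an initiator connected as host0 (run from an SPDK checkout):

# revoke the host: its qpair is dropped and in-flight I/O completes ABORTED - SQ DELETION
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# while revoked, every reconnect attempt fails CONNECT with sct 1, sc 132
# re-admit the host so a later connection can succeed again
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0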
00:17:03.349 00:42:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.349 00:42:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1354630 00:17:04.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1354630) - No such process 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:04.286 { 00:17:04.286 "params": { 00:17:04.286 "name": "Nvme$subsystem", 00:17:04.286 "trtype": "$TEST_TRANSPORT", 00:17:04.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.286 "adrfam": "ipv4", 00:17:04.286 "trsvcid": "$NVMF_PORT", 00:17:04.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.286 "hdgst": ${hdgst:-false}, 00:17:04.286 "ddgst": ${ddgst:-false} 00:17:04.286 }, 00:17:04.286 "method": "bdev_nvme_attach_controller" 00:17:04.286 } 00:17:04.286 EOF 00:17:04.286 )") 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:04.286 00:42:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:04.286 "params": { 00:17:04.286 "name": "Nvme0", 00:17:04.286 "trtype": "tcp", 00:17:04.286 "traddr": "10.0.0.2", 00:17:04.286 "adrfam": "ipv4", 00:17:04.286 "trsvcid": "4420", 00:17:04.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:04.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:04.286 "hdgst": false, 00:17:04.286 "ddgst": false 00:17:04.286 }, 00:17:04.286 "method": "bdev_nvme_attach_controller" 00:17:04.286 }' 00:17:04.547 [2024-07-13 00:42:15.873282] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:04.547 [2024-07-13 00:42:15.873329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354878 ] 00:17:04.547 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.547 [2024-07-13 00:42:15.939279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.547 [2024-07-13 00:42:15.977286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.806 Running I/O for 1 seconds... 
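The bdevperf run above receives its bdev configuration as JSON on an inherited descriptor (--json /dev/fd/62), built by gen_nvmf_target_json from the heredoc template visible in the trace. A stand-alone equivalent writes the same attach-controller entry to a regular file; note the outer "subsystems"/"bdev" wrapper is supplied by the helper's jq template rather than shown in the trace, so treat that part as an assumption, and /tmp/bdevperf.json is just a hypothetical path:

# write the expanded config (the "params"/"method" object matches the printf output above;
# the surrounding wrapper is an assumption about what gen_nvmf_target_json emits)
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# same workload flags as the logged invocation; path relative to an SPDK checkout
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1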
00:17:05.742
00:17:05.742 Latency(us)
00:17:05.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:05.742 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:05.742 Verification LBA range: start 0x0 length 0x400
00:17:05.742 Nvme0n1 : 1.01 1970.28 123.14 0.00 0.00 31971.84 5898.24 27354.16
00:17:05.742 ===================================================================================================================
00:17:05.742 Total : 1970.28 123.14 0.00 0.00 31971.84 5898.24 27354.16
00:17:06.001 00:42:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:42:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:42:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:42:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:42:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1354578 ']'
00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1354578
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1354578 ']'
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1354578
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1354578
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1354578'
killing process with pid 1354578
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1354578
00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1354578
00:17:06.260 [2024-07-13 00:42:17.638118]
app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.260 00:42:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.164 00:42:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.424 00:42:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:08.424 00:17:08.424 real 0m11.956s 00:17:08.424 user 0m18.728s 00:17:08.424 sys 0m5.459s 00:17:08.424 00:42:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.424 00:42:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 ************************************ 00:17:08.424 END TEST nvmf_host_management 00:17:08.424 ************************************ 00:17:08.424 00:42:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:08.424 00:42:19 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:08.424 00:42:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:08.424 00:42:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.424 00:42:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 ************************************ 00:17:08.424 START TEST nvmf_lvol 00:17:08.424 ************************************ 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:08.424 * Looking for test storage... 
00:17:08.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.424 00:42:19 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.424 00:42:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:14.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:14.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.990 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:14.991 Found net devices under 0000:86:00.0: cvl_0_0 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:14.991 Found net devices under 0000:86:00.1: cvl_0_1 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:14.991 
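The discovery loop above resolves each matching PCI function to its kernel net device by globbing sysfs (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)). A minimal manual equivalent, assuming the two E810 ports found here:

# each supported PCI function exposes its netdev name under sysfs
ls /sys/bus/pci/devices/0000:86:00.0/net   # expected: cvl_0_0
ls /sys/bus/pci/devices/0000:86:00.1/net   # expected: cvl_0_1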
00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:14.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:17:14.991 00:17:14.991 --- 10.0.0.2 ping statistics --- 00:17:14.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.991 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:17:14.991 00:17:14.991 --- 10.0.0.1 ping statistics --- 00:17:14.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.991 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1358628 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1358628 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1358628 ']' 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:14.991 [2024-07-13 00:42:25.732846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:14.991 [2024-07-13 00:42:25.732888] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.991 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.991 [2024-07-13 00:42:25.805017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.991 [2024-07-13 00:42:25.845819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.991 [2024-07-13 00:42:25.845857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:14.991 [2024-07-13 00:42:25.845863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.991 [2024-07-13 00:42:25.845869] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.991 [2024-07-13 00:42:25.845875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.991 [2024-07-13 00:42:25.845933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.991 [2024-07-13 00:42:25.846064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.991 [2024-07-13 00:42:25.846064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.991 00:42:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:14.991 [2024-07-13 00:42:26.124188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.991 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:14.991 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:14.991 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:15.248 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:15.248 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:15.248 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:15.505 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=dae3e74e-f7d3-4864-813d-885f927741d0 00:17:15.505 00:42:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dae3e74e-f7d3-4864-813d-885f927741d0 lvol 20 00:17:15.763 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ed585edf-4649-43de-9b9a-a3d50bfb28ee 00:17:15.763 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:15.763 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ed585edf-4649-43de-9b9a-a3d50bfb28ee 00:17:16.021 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:17:16.279 [2024-07-13 00:42:27.631310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.279 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.536 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1359098 00:17:16.536 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:16.536 00:42:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:16.536 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.469 00:42:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ed585edf-4649-43de-9b9a-a3d50bfb28ee MY_SNAPSHOT 00:17:17.728 00:42:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=63622082-22a6-475a-96ca-27c4e3f8d0e8 00:17:17.728 00:42:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ed585edf-4649-43de-9b9a-a3d50bfb28ee 30 00:17:17.987 00:42:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 63622082-22a6-475a-96ca-27c4e3f8d0e8 MY_CLONE 00:17:17.987 00:42:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=949d5de5-cb6f-4fe4-bbb7-a6ac5947f476 00:17:17.987 00:42:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 949d5de5-cb6f-4fe4-bbb7-a6ac5947f476 00:17:18.556 00:42:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1359098 00:17:28.535 Initializing NVMe Controllers 00:17:28.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:28.535 Controller IO queue size 128, less than required. 00:17:28.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:28.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:28.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:28.535 Initialization complete. Launching workers. 
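Condensed, the target-side setup and the lvol operations traced above reduce to the rpc.py sequence below (a sketch, with repository paths shortened to scripts/rpc.py; <lvs-uuid>, <lvol-uuid>, <snap-uuid> and <clone-uuid> stand for the UUIDs each call prints back, not literal values):

  # plumbing: TCP transport, two 64 MiB / 512 B malloc bdevs striped into raid0
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                     # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512                     # -> Malloc1
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  # logical-volume stack: lvstore on the raid, 20 MiB lvol exported over NVMe/TCP
  scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs            # prints <lvs-uuid>
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20        # prints <lvol-uuid>
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # with spdk_nvme_perf holding 128 outstanding random writes, mutate the lvol under load
  scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT    # prints <snap-uuid>
  scripts/rpc.py bdev_lvol_resize <lvol-uuid> 30
  scripts/rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE          # prints <clone-uuid>
  scripts/rpc.py bdev_lvol_inflate <clone-uuid>

The script then simply waits on the perf pid, so the criterion is that the snapshot/resize/clone/inflate sequence completes while I/O is in flight; the perf numbers that follow are informational.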
00:17:28.535 ======================================================== 00:17:28.535 Latency(us) 00:17:28.535 Device Information : IOPS MiB/s Average min max 00:17:28.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12352.00 48.25 10367.81 583.60 62468.83 00:17:28.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12171.50 47.54 10515.98 3021.10 59762.40 00:17:28.535 ======================================================== 00:17:28.535 Total : 24523.50 95.79 10441.35 583.60 62468.83 00:17:28.535 00:17:28.535 00:42:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ed585edf-4649-43de-9b9a-a3d50bfb28ee 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dae3e74e-f7d3-4864-813d-885f927741d0 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.536 rmmod nvme_tcp 00:17:28.536 rmmod nvme_fabrics 00:17:28.536 rmmod nvme_keyring 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1358628 ']' 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1358628 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1358628 ']' 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1358628 00:17:28.536 00:42:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1358628 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1358628' 00:17:28.536 killing process with pid 1358628 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1358628 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1358628 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.536 
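Teardown in the trace above runs in reverse creation order: the subsystem is deleted first (detaching the namespace), then the lvol, then its lvstore, and finally the kernel modules are unloaded. Condensed, with the same path shorthand and placeholders as above:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_lvol_delete <lvol-uuid>
  scripts/rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>
  modprobe -v -r nvme-tcp    # the rmmod lines above show nvme_fabrics and nvme_keyring going with it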
00:42:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.536 00:42:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.916 00:42:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.916 00:17:29.916 real 0m21.528s 00:17:29.916 user 1m2.732s 00:17:29.916 sys 0m7.158s 00:17:29.916 00:42:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:29.916 00:42:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:29.916 ************************************ 00:17:29.916 END TEST nvmf_lvol 00:17:29.916 ************************************ 00:17:29.916 00:42:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:29.916 00:42:41 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:29.916 00:42:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:29.916 00:42:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.916 00:42:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:29.916 ************************************ 00:17:29.916 START TEST nvmf_lvs_grow 00:17:29.916 ************************************ 00:17:29.916 00:42:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:30.177 * Looking for test storage... 
00:17:30.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.177 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.178 00:42:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:36.750 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:36.750 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:36.750 Found net devices under 0000:86:00.0: cvl_0_0 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:36.750 Found net devices under 0000:86:00.1: cvl_0_1 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.750 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:17:36.751 00:17:36.751 --- 10.0.0.2 ping statistics --- 00:17:36.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.751 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:17:36.751 00:17:36.751 --- 10.0.0.1 ping statistics --- 00:17:36.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.751 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1364258 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1364258 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1364258 ']' 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:36.751 [2024-07-13 00:42:47.368816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:36.751 [2024-07-13 00:42:47.368857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.751 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.751 [2024-07-13 00:42:47.440851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.751 [2024-07-13 00:42:47.480790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.751 [2024-07-13 00:42:47.480829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
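The two one-packet pings are the success check for the network bring-up traced just before them: one port of the NIC (cvl_0_0) is moved into a private namespace to host the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator side. Condensed, with the interface and namespace names discovered in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns

Every nvmf_tgt below is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is why its listeners bind 10.0.0.2 while initiators connect from 10.0.0.1.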
00:17:36.751 [2024-07-13 00:42:47.480836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.751 [2024-07-13 00:42:47.480843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.751 [2024-07-13 00:42:47.480848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.751 [2024-07-13 00:42:47.480865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:36.751 [2024-07-13 00:42:47.773696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:36.751 ************************************ 00:17:36.751 START TEST lvs_grow_clean 00:17:36.751 ************************************ 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:36.751 00:42:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:36.751 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:36.751 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:36.751 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:36.751 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:36.751 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:37.013 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:37.013 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:37.013 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 lvol 150 00:17:37.013 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=60bc2bf3-6f07-49fa-bfed-5b1d928c260f 00:17:37.013 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:37.013 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:37.315 [2024-07-13 00:42:48.689902] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:37.315 [2024-07-13 00:42:48.689955] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:37.315 true 00:17:37.315 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:37.315 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:37.315 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:37.315 00:42:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:37.574 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 60bc2bf3-6f07-49fa-bfed-5b1d928c260f 00:17:37.833 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:37.833 [2024-07-13 00:42:49.351896] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.833 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1364752 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1364752 /var/tmp/bdevperf.sock 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1364752 ']' 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.092 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.092 [2024-07-13 00:42:49.573816] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:38.092 [2024-07-13 00:42:49.573862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364752 ] 00:17:38.092 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.092 [2024-07-13 00:42:49.638235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.351 [2024-07-13 00:42:49.678802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.352 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.352 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:38.352 00:42:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:38.611 Nvme0n1 00:17:38.611 00:42:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:38.870 [ 00:17:38.870 { 00:17:38.870 "name": "Nvme0n1", 00:17:38.870 "aliases": [ 00:17:38.870 "60bc2bf3-6f07-49fa-bfed-5b1d928c260f" 00:17:38.870 ], 00:17:38.870 "product_name": "NVMe disk", 00:17:38.870 "block_size": 4096, 00:17:38.870 "num_blocks": 38912, 00:17:38.870 "uuid": "60bc2bf3-6f07-49fa-bfed-5b1d928c260f", 00:17:38.870 "assigned_rate_limits": { 00:17:38.870 "rw_ios_per_sec": 0, 00:17:38.870 "rw_mbytes_per_sec": 0, 00:17:38.870 "r_mbytes_per_sec": 0, 00:17:38.870 "w_mbytes_per_sec": 0 00:17:38.870 }, 00:17:38.870 "claimed": false, 00:17:38.870 "zoned": false, 00:17:38.870 "supported_io_types": { 00:17:38.870 "read": true, 00:17:38.870 "write": true, 00:17:38.870 "unmap": true, 00:17:38.870 "flush": true, 00:17:38.870 "reset": true, 00:17:38.870 "nvme_admin": true, 00:17:38.870 "nvme_io": true, 00:17:38.870 "nvme_io_md": false, 00:17:38.870 "write_zeroes": true, 00:17:38.870 "zcopy": false, 00:17:38.870 "get_zone_info": false, 00:17:38.870 "zone_management": false, 00:17:38.870 "zone_append": false, 00:17:38.870 "compare": true, 00:17:38.870 "compare_and_write": true, 00:17:38.870 "abort": true, 00:17:38.870 "seek_hole": false, 00:17:38.870 "seek_data": false, 00:17:38.870 "copy": true, 00:17:38.870 "nvme_iov_md": false 00:17:38.870 }, 00:17:38.870 "memory_domains": [ 00:17:38.870 { 00:17:38.870 "dma_device_id": "system", 00:17:38.870 "dma_device_type": 1 00:17:38.870 } 00:17:38.870 ], 00:17:38.870 "driver_specific": { 00:17:38.870 "nvme": [ 00:17:38.870 { 00:17:38.870 "trid": { 00:17:38.870 "trtype": "TCP", 00:17:38.870 "adrfam": "IPv4", 00:17:38.870 "traddr": "10.0.0.2", 00:17:38.870 "trsvcid": "4420", 00:17:38.870 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:38.870 }, 00:17:38.870 "ctrlr_data": { 00:17:38.870 "cntlid": 1, 00:17:38.870 "vendor_id": "0x8086", 00:17:38.870 "model_number": "SPDK bdev Controller", 00:17:38.870 "serial_number": "SPDK0", 00:17:38.870 "firmware_revision": "24.09", 00:17:38.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:38.870 "oacs": { 00:17:38.870 "security": 0, 00:17:38.870 "format": 0, 00:17:38.870 "firmware": 0, 00:17:38.870 "ns_manage": 0 00:17:38.870 }, 00:17:38.870 "multi_ctrlr": true, 00:17:38.870 "ana_reporting": false 00:17:38.870 }, 
00:17:38.870 "vs": { 00:17:38.870 "nvme_version": "1.3" 00:17:38.870 }, 00:17:38.870 "ns_data": { 00:17:38.870 "id": 1, 00:17:38.870 "can_share": true 00:17:38.870 } 00:17:38.870 } 00:17:38.870 ], 00:17:38.870 "mp_policy": "active_passive" 00:17:38.870 } 00:17:38.870 } 00:17:38.870 ] 00:17:38.870 00:42:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1364853 00:17:38.870 00:42:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:38.870 00:42:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.870 Running I/O for 10 seconds... 00:17:40.248 Latency(us) 00:17:40.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.248 Nvme0n1 : 1.00 23242.00 90.79 0.00 0.00 0.00 0.00 0.00 00:17:40.248 =================================================================================================================== 00:17:40.248 Total : 23242.00 90.79 0.00 0.00 0.00 0.00 0.00 00:17:40.248 00:17:40.816 00:42:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:41.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.075 Nvme0n1 : 2.00 23376.50 91.31 0.00 0.00 0.00 0.00 0.00 00:17:41.075 =================================================================================================================== 00:17:41.075 Total : 23376.50 91.31 0.00 0.00 0.00 0.00 0.00 00:17:41.075 00:17:41.075 true 00:17:41.075 00:42:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:41.075 00:42:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:41.334 00:42:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:41.334 00:42:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:41.334 00:42:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1364853 00:17:41.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.900 Nvme0n1 : 3.00 23429.67 91.52 0.00 0.00 0.00 0.00 0.00 00:17:41.900 =================================================================================================================== 00:17:41.900 Total : 23429.67 91.52 0.00 0.00 0.00 0.00 0.00 00:17:41.900 00:17:43.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.279 Nvme0n1 : 4.00 23488.00 91.75 0.00 0.00 0.00 0.00 0.00 00:17:43.279 =================================================================================================================== 00:17:43.279 Total : 23488.00 91.75 0.00 0.00 0.00 0.00 0.00 00:17:43.279 00:17:43.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.847 Nvme0n1 : 5.00 23540.40 91.95 0.00 0.00 0.00 0.00 0.00 00:17:43.847 =================================================================================================================== 00:17:43.847 
Total : 23540.40 91.95 0.00 0.00 0.00 0.00 0.00 00:17:43.847 00:17:45.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.225 Nvme0n1 : 6.00 23564.67 92.05 0.00 0.00 0.00 0.00 0.00 00:17:45.225 =================================================================================================================== 00:17:45.225 Total : 23564.67 92.05 0.00 0.00 0.00 0.00 0.00 00:17:45.225 00:17:46.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.161 Nvme0n1 : 7.00 23596.43 92.17 0.00 0.00 0.00 0.00 0.00 00:17:46.161 =================================================================================================================== 00:17:46.161 Total : 23596.43 92.17 0.00 0.00 0.00 0.00 0.00 00:17:46.161 00:17:47.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.097 Nvme0n1 : 8.00 23616.62 92.25 0.00 0.00 0.00 0.00 0.00 00:17:47.097 =================================================================================================================== 00:17:47.097 Total : 23616.62 92.25 0.00 0.00 0.00 0.00 0.00 00:17:47.097 00:17:48.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.034 Nvme0n1 : 9.00 23637.11 92.33 0.00 0.00 0.00 0.00 0.00 00:17:48.034 =================================================================================================================== 00:17:48.034 Total : 23637.11 92.33 0.00 0.00 0.00 0.00 0.00 00:17:48.034 00:17:48.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.972 Nvme0n1 : 10.00 23629.50 92.30 0.00 0.00 0.00 0.00 0.00 00:17:48.972 =================================================================================================================== 00:17:48.972 Total : 23629.50 92.30 0.00 0.00 0.00 0.00 0.00 00:17:48.972 00:17:48.972 00:17:48.972 Latency(us) 00:17:48.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.972 Nvme0n1 : 10.01 23630.08 92.31 0.00 0.00 5414.16 3234.06 13335.15 00:17:48.972 =================================================================================================================== 00:17:48.972 Total : 23630.08 92.31 0.00 0.00 5414.16 3234.06 13335.15 00:17:48.972 0 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1364752 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1364752 ']' 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1364752 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1364752 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1364752' 00:17:48.972 killing process with pid 1364752 00:17:48.972 00:43:00 
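The initiator side of lvs_grow_clean is bdevperf rather than spdk_nvme_perf: it is started with its own RPC socket, the NVMe-oF controller is attached through that socket, and I/O is kicked off out of band while the lvstore is grown on the target. The moving parts, condensed (paths shortened as before; <lvs-uuid> is again a placeholder):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0        # exposes Nvme0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # sanity-check the bdev
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>    # against the target's default socket, mid-run

Note that the grow lands around the two-second mark of the run, and the per-second IOPS samples above barely move.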
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1364752 00:17:48.972 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.972 00:17:48.972 Latency(us) 00:17:48.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.972 =================================================================================================================== 00:17:48.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.972 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1364752 00:17:49.231 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:49.489 00:43:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:49.489 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:49.489 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:49.747 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:49.747 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:49.747 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:50.005 [2024-07-13 00:43:01.365797] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:50.005 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:50.263 request: 00:17:50.263 { 00:17:50.263 "uuid": "f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6", 00:17:50.263 "method": "bdev_lvol_get_lvstores", 00:17:50.263 "req_id": 1 00:17:50.263 } 00:17:50.263 Got JSON-RPC error response 00:17:50.263 response: 00:17:50.263 { 00:17:50.263 "code": -19, 00:17:50.263 "message": "No such device" 00:17:50.263 } 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:50.263 aio_bdev 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 60bc2bf3-6f07-49fa-bfed-5b1d928c260f 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=60bc2bf3-6f07-49fa-bfed-5b1d928c260f 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:50.263 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:50.264 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:50.264 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:50.264 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:50.521 00:43:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 60bc2bf3-6f07-49fa-bfed-5b1d928c260f -t 2000 00:17:50.779 [ 00:17:50.779 { 00:17:50.779 "name": "60bc2bf3-6f07-49fa-bfed-5b1d928c260f", 00:17:50.779 "aliases": [ 00:17:50.779 "lvs/lvol" 00:17:50.779 ], 00:17:50.779 "product_name": "Logical Volume", 00:17:50.779 "block_size": 4096, 00:17:50.779 "num_blocks": 38912, 00:17:50.779 "uuid": "60bc2bf3-6f07-49fa-bfed-5b1d928c260f", 00:17:50.779 "assigned_rate_limits": { 00:17:50.779 "rw_ios_per_sec": 0, 00:17:50.779 "rw_mbytes_per_sec": 0, 00:17:50.779 "r_mbytes_per_sec": 0, 00:17:50.779 "w_mbytes_per_sec": 0 00:17:50.779 }, 00:17:50.779 "claimed": false, 00:17:50.779 "zoned": false, 00:17:50.779 "supported_io_types": { 00:17:50.779 "read": true, 00:17:50.779 "write": true, 00:17:50.779 "unmap": true, 00:17:50.779 "flush": false, 00:17:50.779 "reset": true, 00:17:50.779 "nvme_admin": false, 00:17:50.779 "nvme_io": false, 00:17:50.779 
"nvme_io_md": false, 00:17:50.779 "write_zeroes": true, 00:17:50.779 "zcopy": false, 00:17:50.779 "get_zone_info": false, 00:17:50.779 "zone_management": false, 00:17:50.779 "zone_append": false, 00:17:50.779 "compare": false, 00:17:50.779 "compare_and_write": false, 00:17:50.779 "abort": false, 00:17:50.779 "seek_hole": true, 00:17:50.779 "seek_data": true, 00:17:50.779 "copy": false, 00:17:50.779 "nvme_iov_md": false 00:17:50.779 }, 00:17:50.779 "driver_specific": { 00:17:50.779 "lvol": { 00:17:50.779 "lvol_store_uuid": "f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6", 00:17:50.779 "base_bdev": "aio_bdev", 00:17:50.779 "thin_provision": false, 00:17:50.779 "num_allocated_clusters": 38, 00:17:50.779 "snapshot": false, 00:17:50.779 "clone": false, 00:17:50.779 "esnap_clone": false 00:17:50.779 } 00:17:50.779 } 00:17:50.779 } 00:17:50.779 ] 00:17:50.779 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:50.779 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:50.779 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:50.779 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:50.779 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:50.779 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:51.037 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:51.037 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 60bc2bf3-6f07-49fa-bfed-5b1d928c260f 00:17:51.295 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9cd4143-e6cd-4be5-9e0c-c14d4bebbea6 00:17:51.295 00:43:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.553 00:17:51.553 real 0m15.228s 00:17:51.553 user 0m14.769s 00:17:51.553 sys 0m1.432s 00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:51.553 ************************************ 00:17:51.553 END TEST lvs_grow_clean 00:17:51.553 ************************************ 00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:51.553 00:43:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:51.812 ************************************ 00:17:51.812 START TEST lvs_grow_dirty 00:17:51.812 ************************************ 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:51.812 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:52.070 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9bb19df1-6fc7-411d-b9ab-2822a5694716 00:17:52.070 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:17:52.070 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:52.328 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:52.328 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:52.328 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 lvol 150 00:17:52.328 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b56827f-88ae-442b-9668-d9d86e9092e8 00:17:52.328 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:52.328 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:52.588 
[2024-07-13 00:43:03.978868] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:52.588 [2024-07-13 00:43:03.978915] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:52.588 true 00:17:52.588 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:52.588 00:43:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:17:52.848 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:52.848 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:52.848 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b56827f-88ae-442b-9668-d9d86e9092e8 00:17:53.108 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:53.368 [2024-07-13 00:43:04.672948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1367338 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1367338 /var/tmp/bdevperf.sock 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1367338 ']' 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
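[Editor's annotation] The dirty-path setup above grows the backing store while the lvstore is live: the AIO file is created at 200M (51200 blocks at a 4096-byte block size), a 150M lvol is carved out of the 49-cluster lvstore, the file is truncated to 400M, and bdev_aio_rescan makes the target pick up the new size (the resize notice reports 51200 -> 102400 blocks). The lvol is then exported over NVMe/TCP and bdevperf is launched against it. A sketch of the grow-and-rescan step, with the file path illustrative:

    # Sketch: grow a live AIO bdev's backing file, then rescan it.
    truncate -s 400M /path/to/aio_bdev_file     # file was created at 200M
    "$SPDK/scripts/rpc.py" bdev_aio_rescan aio_bdev
    # Expected target notice: "AIO device is resized: ... old block count 51200,
    # new block count 102400" (200M -> 400M at a 4096-byte block size).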
00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.368 00:43:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:53.368 [2024-07-13 00:43:04.903346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:53.368 [2024-07-13 00:43:04.903394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367338 ] 00:17:53.368 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.628 [2024-07-13 00:43:04.971502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.628 [2024-07-13 00:43:05.011136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.628 00:43:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.628 00:43:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:53.628 00:43:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:53.887 Nvme0n1 00:17:53.887 00:43:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:54.146 [ 00:17:54.146 { 00:17:54.146 "name": "Nvme0n1", 00:17:54.146 "aliases": [ 00:17:54.146 "8b56827f-88ae-442b-9668-d9d86e9092e8" 00:17:54.146 ], 00:17:54.146 "product_name": "NVMe disk", 00:17:54.146 "block_size": 4096, 00:17:54.146 "num_blocks": 38912, 00:17:54.146 "uuid": "8b56827f-88ae-442b-9668-d9d86e9092e8", 00:17:54.146 "assigned_rate_limits": { 00:17:54.146 "rw_ios_per_sec": 0, 00:17:54.146 "rw_mbytes_per_sec": 0, 00:17:54.146 "r_mbytes_per_sec": 0, 00:17:54.146 "w_mbytes_per_sec": 0 00:17:54.146 }, 00:17:54.146 "claimed": false, 00:17:54.146 "zoned": false, 00:17:54.146 "supported_io_types": { 00:17:54.146 "read": true, 00:17:54.146 "write": true, 00:17:54.146 "unmap": true, 00:17:54.146 "flush": true, 00:17:54.146 "reset": true, 00:17:54.146 "nvme_admin": true, 00:17:54.146 "nvme_io": true, 00:17:54.146 "nvme_io_md": false, 00:17:54.146 "write_zeroes": true, 00:17:54.146 "zcopy": false, 00:17:54.146 "get_zone_info": false, 00:17:54.146 "zone_management": false, 00:17:54.146 "zone_append": false, 00:17:54.146 "compare": true, 00:17:54.146 "compare_and_write": true, 00:17:54.146 "abort": true, 00:17:54.146 "seek_hole": false, 00:17:54.147 "seek_data": false, 00:17:54.147 "copy": true, 00:17:54.147 "nvme_iov_md": false 00:17:54.147 }, 00:17:54.147 "memory_domains": [ 00:17:54.147 { 00:17:54.147 "dma_device_id": "system", 00:17:54.147 "dma_device_type": 1 00:17:54.147 } 00:17:54.147 ], 00:17:54.147 "driver_specific": { 00:17:54.147 "nvme": [ 00:17:54.147 { 00:17:54.147 "trid": { 00:17:54.147 "trtype": "TCP", 00:17:54.147 "adrfam": "IPv4", 00:17:54.147 "traddr": "10.0.0.2", 00:17:54.147 "trsvcid": "4420", 00:17:54.147 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:54.147 }, 00:17:54.147 "ctrlr_data": { 00:17:54.147 "cntlid": 1, 00:17:54.147 "vendor_id": "0x8086", 00:17:54.147 "model_number": "SPDK bdev Controller", 00:17:54.147 "serial_number": "SPDK0", 
00:17:54.147 "firmware_revision": "24.09", 00:17:54.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:54.147 "oacs": { 00:17:54.147 "security": 0, 00:17:54.147 "format": 0, 00:17:54.147 "firmware": 0, 00:17:54.147 "ns_manage": 0 00:17:54.147 }, 00:17:54.147 "multi_ctrlr": true, 00:17:54.147 "ana_reporting": false 00:17:54.147 }, 00:17:54.147 "vs": { 00:17:54.147 "nvme_version": "1.3" 00:17:54.147 }, 00:17:54.147 "ns_data": { 00:17:54.147 "id": 1, 00:17:54.147 "can_share": true 00:17:54.147 } 00:17:54.147 } 00:17:54.147 ], 00:17:54.147 "mp_policy": "active_passive" 00:17:54.147 } 00:17:54.147 } 00:17:54.147 ] 00:17:54.147 00:43:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:54.147 00:43:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1367350 00:17:54.147 00:43:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:54.147 Running I/O for 10 seconds... 00:17:55.084 Latency(us) 00:17:55.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.084 Nvme0n1 : 1.00 23252.00 90.83 0.00 0.00 0.00 0.00 0.00 00:17:55.084 =================================================================================================================== 00:17:55.084 Total : 23252.00 90.83 0.00 0.00 0.00 0.00 0.00 00:17:55.084 00:17:56.056 00:43:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:17:56.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.056 Nvme0n1 : 2.00 23342.00 91.18 0.00 0.00 0.00 0.00 0.00 00:17:56.056 =================================================================================================================== 00:17:56.056 Total : 23342.00 91.18 0.00 0.00 0.00 0.00 0.00 00:17:56.056 00:17:56.315 true 00:17:56.315 00:43:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:17:56.315 00:43:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:56.574 00:43:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:56.574 00:43:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:56.574 00:43:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1367350 00:17:57.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.143 Nvme0n1 : 3.00 23416.00 91.47 0.00 0.00 0.00 0.00 0.00 00:17:57.143 =================================================================================================================== 00:17:57.143 Total : 23416.00 91.47 0.00 0.00 0.00 0.00 0.00 00:17:57.143 00:17:58.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.078 Nvme0n1 : 4.00 23503.75 91.81 0.00 0.00 0.00 0.00 0.00 00:17:58.078 =================================================================================================================== 00:17:58.078 Total : 23503.75 91.81 0.00 
0.00 0.00 0.00 0.00 00:17:58.078 00:17:59.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.455 Nvme0n1 : 5.00 23547.40 91.98 0.00 0.00 0.00 0.00 0.00 00:17:59.455 =================================================================================================================== 00:17:59.455 Total : 23547.40 91.98 0.00 0.00 0.00 0.00 0.00 00:17:59.455 00:18:00.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.394 Nvme0n1 : 6.00 23586.17 92.13 0.00 0.00 0.00 0.00 0.00 00:18:00.394 =================================================================================================================== 00:18:00.394 Total : 23586.17 92.13 0.00 0.00 0.00 0.00 0.00 00:18:00.394 00:18:01.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.331 Nvme0n1 : 7.00 23618.71 92.26 0.00 0.00 0.00 0.00 0.00 00:18:01.331 =================================================================================================================== 00:18:01.331 Total : 23618.71 92.26 0.00 0.00 0.00 0.00 0.00 00:18:01.331 00:18:02.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.264 Nvme0n1 : 8.00 23643.88 92.36 0.00 0.00 0.00 0.00 0.00 00:18:02.264 =================================================================================================================== 00:18:02.264 Total : 23643.88 92.36 0.00 0.00 0.00 0.00 0.00 00:18:02.264 00:18:03.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.200 Nvme0n1 : 9.00 23662.78 92.43 0.00 0.00 0.00 0.00 0.00 00:18:03.200 =================================================================================================================== 00:18:03.200 Total : 23662.78 92.43 0.00 0.00 0.00 0.00 0.00 00:18:03.200 00:18:04.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.145 Nvme0n1 : 10.00 23679.90 92.50 0.00 0.00 0.00 0.00 0.00 00:18:04.145 =================================================================================================================== 00:18:04.145 Total : 23679.90 92.50 0.00 0.00 0.00 0.00 0.00 00:18:04.145 00:18:04.145 00:18:04.145 Latency(us) 00:18:04.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.145 Nvme0n1 : 10.00 23680.48 92.50 0.00 0.00 5401.93 2051.56 13563.10 00:18:04.145 =================================================================================================================== 00:18:04.145 Total : 23680.48 92.50 0.00 0.00 5401.93 2051.56 13563.10 00:18:04.145 0 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1367338 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1367338 ']' 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1367338 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1367338 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.145 00:43:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1367338' 00:18:04.145 killing process with pid 1367338 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1367338 00:18:04.145 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.145 00:18:04.145 Latency(us) 00:18:04.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.145 =================================================================================================================== 00:18:04.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.145 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1367338 00:18:04.405 00:43:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:04.664 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:04.664 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:04.664 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1364258 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1364258 00:18:04.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1364258 Killed "${NVMF_APP[@]}" "$@" 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1369193 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1369193 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1369193 ']' 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.923 00:43:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:05.181 [2024-07-13 00:43:16.491348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:05.181 [2024-07-13 00:43:16.491395] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.181 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.181 [2024-07-13 00:43:16.562145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.181 [2024-07-13 00:43:16.601564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.181 [2024-07-13 00:43:16.601603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.181 [2024-07-13 00:43:16.601610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.181 [2024-07-13 00:43:16.601616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.181 [2024-07-13 00:43:16.601625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
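[Editor's annotation] This restart is the crash-consistency half of the dirty test: the previous target (pid 1364258) was SIGKILLed at nvmf_lvs_grow.sh line 75 while the lvstore metadata was still dirty, and a fresh nvmf_tgt (pid 1369193) has just been started in the same network namespace. When aio_bdev is re-created below, the blobstore detects the unclean shutdown and replays its metadata ("Performing recovery on blobstore"). A sketch of the kill-and-restart step, with $NVMF_PID and paths illustrative:

    # Sketch: simulate a crash with SIGKILL, then restart the target.
    kill -9 "$NVMF_PID"     # metadata never flushed -> lvstore left dirty
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    NVMF_PID=$!
    # Re-creating the AIO bdev now triggers blobstore recovery (notices below).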
00:18:05.181 [2024-07-13 00:43:16.601647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.748 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.748 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:05.748 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.748 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.748 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:06.007 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.007 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:06.007 [2024-07-13 00:43:17.469032] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:06.007 [2024-07-13 00:43:17.469129] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:06.007 [2024-07-13 00:43:17.469154] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:06.007 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:06.007 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8b56827f-88ae-442b-9668-d9d86e9092e8 00:18:06.007 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b56827f-88ae-442b-9668-d9d86e9092e8 00:18:06.007 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:06.008 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:06.008 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:06.008 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:06.008 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:06.266 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b56827f-88ae-442b-9668-d9d86e9092e8 -t 2000 00:18:06.266 [ 00:18:06.266 { 00:18:06.266 "name": "8b56827f-88ae-442b-9668-d9d86e9092e8", 00:18:06.266 "aliases": [ 00:18:06.266 "lvs/lvol" 00:18:06.266 ], 00:18:06.266 "product_name": "Logical Volume", 00:18:06.266 "block_size": 4096, 00:18:06.266 "num_blocks": 38912, 00:18:06.266 "uuid": "8b56827f-88ae-442b-9668-d9d86e9092e8", 00:18:06.266 "assigned_rate_limits": { 00:18:06.266 "rw_ios_per_sec": 0, 00:18:06.266 "rw_mbytes_per_sec": 0, 00:18:06.266 "r_mbytes_per_sec": 0, 00:18:06.266 "w_mbytes_per_sec": 0 00:18:06.266 }, 00:18:06.266 "claimed": false, 00:18:06.266 "zoned": false, 00:18:06.266 "supported_io_types": { 00:18:06.266 "read": true, 00:18:06.266 "write": true, 00:18:06.266 "unmap": true, 00:18:06.266 "flush": false, 00:18:06.266 "reset": true, 00:18:06.266 "nvme_admin": false, 00:18:06.266 "nvme_io": false, 00:18:06.266 "nvme_io_md": 
false, 00:18:06.266 "write_zeroes": true, 00:18:06.266 "zcopy": false, 00:18:06.266 "get_zone_info": false, 00:18:06.266 "zone_management": false, 00:18:06.266 "zone_append": false, 00:18:06.266 "compare": false, 00:18:06.266 "compare_and_write": false, 00:18:06.266 "abort": false, 00:18:06.266 "seek_hole": true, 00:18:06.266 "seek_data": true, 00:18:06.266 "copy": false, 00:18:06.266 "nvme_iov_md": false 00:18:06.266 }, 00:18:06.266 "driver_specific": { 00:18:06.266 "lvol": { 00:18:06.266 "lvol_store_uuid": "9bb19df1-6fc7-411d-b9ab-2822a5694716", 00:18:06.266 "base_bdev": "aio_bdev", 00:18:06.266 "thin_provision": false, 00:18:06.266 "num_allocated_clusters": 38, 00:18:06.266 "snapshot": false, 00:18:06.266 "clone": false, 00:18:06.266 "esnap_clone": false 00:18:06.266 } 00:18:06.266 } 00:18:06.266 } 00:18:06.266 ] 00:18:06.524 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:06.524 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:06.524 00:43:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:06.524 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:06.524 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:06.524 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:06.782 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:06.782 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:06.782 [2024-07-13 00:43:18.337638] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
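[Editor's annotation] NOT is the harness's expect-failure wrapper (the autotest_common.sh machinery traced above): it resolves the command, runs it, and inverts the exit status, so the step passes only if the RPC fails — which it must here, because bdev_aio_delete hot-removed the base bdev and closed the lvstore. A minimal standalone equivalent in plain shell:

    # Sketch: an expect-failure wrapper in the spirit of the harness's NOT helper.
    not() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        else
            return 0    # command failed, as required
        fi
    }
    # usage: not "$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$uuid"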
00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:07.040 request: 00:18:07.040 { 00:18:07.040 "uuid": "9bb19df1-6fc7-411d-b9ab-2822a5694716", 00:18:07.040 "method": "bdev_lvol_get_lvstores", 00:18:07.040 "req_id": 1 00:18:07.040 } 00:18:07.040 Got JSON-RPC error response 00:18:07.040 response: 00:18:07.040 { 00:18:07.040 "code": -19, 00:18:07.040 "message": "No such device" 00:18:07.040 } 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.040 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:07.297 aio_bdev 00:18:07.297 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b56827f-88ae-442b-9668-d9d86e9092e8 00:18:07.297 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=8b56827f-88ae-442b-9668-d9d86e9092e8 00:18:07.297 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:07.297 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:07.297 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:07.297 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:07.297 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:07.555 00:43:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b56827f-88ae-442b-9668-d9d86e9092e8 -t 2000 00:18:07.555 [ 00:18:07.555 { 00:18:07.555 "name": "8b56827f-88ae-442b-9668-d9d86e9092e8", 00:18:07.555 "aliases": [ 00:18:07.555 "lvs/lvol" 00:18:07.555 ], 00:18:07.555 "product_name": "Logical Volume", 00:18:07.555 "block_size": 4096, 00:18:07.555 "num_blocks": 38912, 00:18:07.555 "uuid": "8b56827f-88ae-442b-9668-d9d86e9092e8", 00:18:07.555 "assigned_rate_limits": { 00:18:07.555 "rw_ios_per_sec": 0, 00:18:07.555 "rw_mbytes_per_sec": 0, 00:18:07.555 "r_mbytes_per_sec": 0, 00:18:07.555 "w_mbytes_per_sec": 0 00:18:07.555 }, 00:18:07.555 "claimed": false, 00:18:07.555 "zoned": false, 00:18:07.555 "supported_io_types": { 
00:18:07.555 "read": true, 00:18:07.555 "write": true, 00:18:07.555 "unmap": true, 00:18:07.555 "flush": false, 00:18:07.555 "reset": true, 00:18:07.555 "nvme_admin": false, 00:18:07.555 "nvme_io": false, 00:18:07.555 "nvme_io_md": false, 00:18:07.555 "write_zeroes": true, 00:18:07.555 "zcopy": false, 00:18:07.555 "get_zone_info": false, 00:18:07.555 "zone_management": false, 00:18:07.555 "zone_append": false, 00:18:07.555 "compare": false, 00:18:07.556 "compare_and_write": false, 00:18:07.556 "abort": false, 00:18:07.556 "seek_hole": true, 00:18:07.556 "seek_data": true, 00:18:07.556 "copy": false, 00:18:07.556 "nvme_iov_md": false 00:18:07.556 }, 00:18:07.556 "driver_specific": { 00:18:07.556 "lvol": { 00:18:07.556 "lvol_store_uuid": "9bb19df1-6fc7-411d-b9ab-2822a5694716", 00:18:07.556 "base_bdev": "aio_bdev", 00:18:07.556 "thin_provision": false, 00:18:07.556 "num_allocated_clusters": 38, 00:18:07.556 "snapshot": false, 00:18:07.556 "clone": false, 00:18:07.556 "esnap_clone": false 00:18:07.556 } 00:18:07.556 } 00:18:07.556 } 00:18:07.556 ] 00:18:07.814 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:07.814 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:07.814 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:07.814 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:07.814 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:07.814 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:08.073 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:08.073 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b56827f-88ae-442b-9668-d9d86e9092e8 00:18:08.332 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9bb19df1-6fc7-411d-b9ab-2822a5694716 00:18:08.332 00:43:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:08.591 00:18:08.591 real 0m16.929s 00:18:08.591 user 0m42.575s 00:18:08.591 sys 0m3.596s 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:08.591 ************************************ 00:18:08.591 END TEST lvs_grow_dirty 00:18:08.591 ************************************ 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:08.591 nvmf_trace.0 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.591 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.591 rmmod nvme_tcp 00:18:08.850 rmmod nvme_fabrics 00:18:08.850 rmmod nvme_keyring 00:18:08.850 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1369193 ']' 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1369193 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1369193 ']' 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1369193 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1369193 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1369193' 00:18:08.851 killing process with pid 1369193 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1369193 00:18:08.851 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1369193 00:18:09.110 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.110 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.110 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.110 
00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.110 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.110 00:43:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.110 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.110 00:43:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.015 00:43:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.015 00:18:11.015 real 0m41.082s 00:18:11.015 user 1m3.148s 00:18:11.015 sys 0m9.785s 00:18:11.015 00:43:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.015 00:43:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:11.015 ************************************ 00:18:11.015 END TEST nvmf_lvs_grow 00:18:11.015 ************************************ 00:18:11.015 00:43:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.015 00:43:22 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:11.015 00:43:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.015 00:43:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.015 00:43:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.015 ************************************ 00:18:11.015 START TEST nvmf_bdev_io_wait 00:18:11.015 ************************************ 00:18:11.015 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:11.275 * Looking for test storage... 
00:18:11.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:11.275 00:43:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:16.553 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:16.553 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:16.553 Found net devices under 0000:86:00.0: cvl_0_0 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.553 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:16.554 Found net devices under 0000:86:00.1: cvl_0_1 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.554 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:16.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:18:16.868 00:18:16.868 --- 10.0.0.2 ping statistics --- 00:18:16.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.868 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:18:16.868 00:18:16.868 --- 10.0.0.1 ping statistics --- 00:18:16.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.868 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1373316 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1373316 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1373316 ']' 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.868 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:16.868 [2024-07-13 00:43:28.388914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
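The nvmf_tcp_init trace above is what lets one machine act as both NVMe/TCP target and initiator: the two E810 ports are split across network namespaces. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run:

  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify reachability both ways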
00:18:16.869 [2024-07-13 00:43:28.388955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.869 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.139 [2024-07-13 00:43:28.458932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.140 [2024-07-13 00:43:28.501053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.140 [2024-07-13 00:43:28.501095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.140 [2024-07-13 00:43:28.501103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.140 [2024-07-13 00:43:28.501110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.140 [2024-07-13 00:43:28.501115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.140 [2024-07-13 00:43:28.501175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.140 [2024-07-13 00:43:28.501204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.140 [2024-07-13 00:43:28.501334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.140 [2024-07-13 00:43:28.501335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 [2024-07-13 00:43:28.641721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 Malloc0 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.140 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:17.400 [2024-07-13 00:43:28.700007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1373475 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1373477 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:17.400 { 00:18:17.400 "params": { 00:18:17.400 "name": "Nvme$subsystem", 00:18:17.400 "trtype": "$TEST_TRANSPORT", 00:18:17.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "$NVMF_PORT", 00:18:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.400 "hdgst": ${hdgst:-false}, 00:18:17.400 "ddgst": ${ddgst:-false} 00:18:17.400 }, 00:18:17.400 "method": "bdev_nvme_attach_controller" 00:18:17.400 } 00:18:17.400 EOF 00:18:17.400 )") 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1373479 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:17.400 { 00:18:17.400 "params": { 00:18:17.400 "name": "Nvme$subsystem", 00:18:17.400 "trtype": "$TEST_TRANSPORT", 00:18:17.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "$NVMF_PORT", 00:18:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.400 "hdgst": ${hdgst:-false}, 00:18:17.400 "ddgst": ${ddgst:-false} 00:18:17.400 }, 00:18:17.400 "method": "bdev_nvme_attach_controller" 00:18:17.400 } 00:18:17.400 EOF 00:18:17.400 )") 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1373482 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:17.400 { 00:18:17.400 "params": { 00:18:17.400 "name": "Nvme$subsystem", 00:18:17.400 "trtype": "$TEST_TRANSPORT", 00:18:17.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "$NVMF_PORT", 00:18:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.400 "hdgst": ${hdgst:-false}, 00:18:17.400 "ddgst": ${ddgst:-false} 00:18:17.400 }, 00:18:17.400 "method": "bdev_nvme_attach_controller" 00:18:17.400 } 00:18:17.400 EOF 00:18:17.400 )") 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:17.400 00:43:28 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:17.400 { 00:18:17.400 "params": { 00:18:17.400 "name": "Nvme$subsystem", 00:18:17.400 "trtype": "$TEST_TRANSPORT", 00:18:17.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "$NVMF_PORT", 00:18:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.400 "hdgst": ${hdgst:-false}, 00:18:17.400 "ddgst": ${ddgst:-false} 00:18:17.400 }, 00:18:17.400 "method": "bdev_nvme_attach_controller" 00:18:17.400 } 00:18:17.400 EOF 00:18:17.400 )") 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1373475 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:17.400 "params": { 00:18:17.400 "name": "Nvme1", 00:18:17.400 "trtype": "tcp", 00:18:17.400 "traddr": "10.0.0.2", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "4420", 00:18:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.400 "hdgst": false, 00:18:17.400 "ddgst": false 00:18:17.400 }, 00:18:17.400 "method": "bdev_nvme_attach_controller" 00:18:17.400 }' 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
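Each heredoc above expands to the same JSON fragment, which gen_nvmf_target_json wraps into a bdev config and hands to its bdevperf instance through --json /dev/fd/63; the printf output that follows shows the rendered result for each of the four workloads. Functionally each fragment is a single controller attach, which could equally be issued as an RPC (a sketch; the bdevperf RPC socket is hypothetical here, since these instances read their config from the JSON pipe instead):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme1 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1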
00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:17.400 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:17.400 "params": { 00:18:17.400 "name": "Nvme1", 00:18:17.400 "trtype": "tcp", 00:18:17.400 "traddr": "10.0.0.2", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "4420", 00:18:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.400 "hdgst": false, 00:18:17.400 "ddgst": false 00:18:17.400 }, 00:18:17.400 "method": "bdev_nvme_attach_controller" 00:18:17.400 }' 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:17.400 "params": { 00:18:17.400 "name": "Nvme1", 00:18:17.400 "trtype": "tcp", 00:18:17.400 "traddr": "10.0.0.2", 00:18:17.400 "adrfam": "ipv4", 00:18:17.400 "trsvcid": "4420", 00:18:17.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.400 "hdgst": false, 00:18:17.400 "ddgst": false 00:18:17.400 }, 00:18:17.400 "method": "bdev_nvme_attach_controller" 00:18:17.400 }' 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:43:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:17.401 "params": { 00:18:17.401 "name": "Nvme1", 00:18:17.401 "trtype": "tcp", 00:18:17.401 "traddr": "10.0.0.2", 00:18:17.401 "adrfam": "ipv4", 00:18:17.401 "trsvcid": "4420", 00:18:17.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.401 "hdgst": false, 00:18:17.401 "ddgst": false 00:18:17.401 }, 00:18:17.401 "method": "bdev_nvme_attach_controller" 00:18:17.401 }' 00:18:17.401 [2024-07-13 00:43:28.750385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:17.401 [2024-07-13 00:43:28.750388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:17.401 [2024-07-13 00:43:28.750400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:17.401 [2024-07-13 00:43:28.750434] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:17.401 [2024-07-13 00:43:28.750435] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:17.401 [2024-07-13 00:43:28.750436] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:17.401 [2024-07-13 00:43:28.753591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:18:17.401 [2024-07-13 00:43:28.753638] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:17.401 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.401 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.401 [2024-07-13 00:43:28.932745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.660 [2024-07-13 00:43:28.960092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:17.660 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.660 [2024-07-13 00:43:29.024373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.660 [2024-07-13 00:43:29.050825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:17.660 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.660 [2024-07-13 00:43:29.124809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.660 [2024-07-13 00:43:29.157718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:17.660 [2024-07-13 00:43:29.165938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.660 [2024-07-13 00:43:29.192583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:17.920 Running I/O for 1 seconds... 00:18:17.920 Running I/O for 1 seconds... 00:18:18.180 Running I/O for 1 seconds... 00:18:18.180 Running I/O for 1 seconds... 00:18:19.119 00:18:19.119 Latency(us) 00:18:19.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.119 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:19.119 Nvme1n1 : 1.01 12048.93 47.07 0.00 0.00 10582.95 6325.65 16754.42 00:18:19.119 =================================================================================================================== 00:18:19.119 Total : 12048.93 47.07 0.00 0.00 10582.95 6325.65 16754.42 00:18:19.119 00:18:19.119 Latency(us) 00:18:19.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.119 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:19.119 Nvme1n1 : 1.01 11018.65 43.04 0.00 0.00 11580.57 5670.29 21883.33 00:18:19.119 =================================================================================================================== 00:18:19.119 Total : 11018.65 43.04 0.00 0.00 11580.57 5670.29 21883.33 00:18:19.119 00:18:19.119 Latency(us) 00:18:19.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.119 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:19.119 Nvme1n1 : 1.00 244480.68 955.00 0.00 0.00 521.67 208.36 637.55 00:18:19.119 =================================================================================================================== 00:18:19.119 Total : 244480.68 955.00 0.00 0.00 521.67 208.36 637.55 00:18:19.119 00:18:19.119 Latency(us) 00:18:19.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.119 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:19.119 Nvme1n1 : 1.00 10394.56 40.60 0.00 0.00 12284.10 4644.51 25416.57 00:18:19.119 =================================================================================================================== 00:18:19.119 Total : 10394.56 40.60 0.00 0.00 12284.10 4644.51 25416.57 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1373477 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1373479 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1373482 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.378 rmmod nvme_tcp 00:18:19.378 rmmod nvme_fabrics 00:18:19.378 rmmod nvme_keyring 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:19.378 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1373316 ']' 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1373316 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1373316 ']' 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1373316 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1373316 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1373316' 00:18:19.379 killing process with pid 1373316 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1373316 00:18:19.379 00:43:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1373316 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.638 00:43:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.174 00:43:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:22.174 00:18:22.174 real 0m10.588s 00:18:22.174 user 0m17.108s 00:18:22.174 sys 0m5.995s 00:18:22.174 00:43:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.174 00:43:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:22.174 ************************************ 00:18:22.174 END TEST nvmf_bdev_io_wait 00:18:22.174 ************************************ 00:18:22.174 00:43:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:22.174 00:43:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:22.174 00:43:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:22.174 00:43:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.174 00:43:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.174 ************************************ 00:18:22.174 START TEST nvmf_queue_depth 00:18:22.174 ************************************ 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:22.174 * Looking for test storage... 
00:18:22.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.174 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.175 00:43:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.452 
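The arrays above key the NIC scan on PCI vendor/device IDs; the 0x8086:0x159b matches that follow are the same two E810 ports found in the previous test. The lookup can be reproduced by hand, e.g.:

  lspci -d 8086:159b                          # list ports by vendor:device ID
  ls /sys/bus/pci/devices/0000:86:00.0/net    # netdev name bound to a port (cvl_0_0 here)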
00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:27.452 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:27.452 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:27.452 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:27.453 Found net devices under 0000:86:00.0: cvl_0_0 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:27.453 Found net devices under 0000:86:00.1: cvl_0_1 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.453 00:43:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:27.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:18:27.712 00:18:27.712 --- 10.0.0.2 ping statistics --- 00:18:27.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.712 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:18:27.712 00:18:27.712 --- 10.0.0.1 ping statistics --- 00:18:27.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.712 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1377258 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1377258 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1377258 ']' 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.712 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.712 [2024-07-13 00:43:39.172608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:18:27.712 [2024-07-13 00:43:39.172655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.713 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.713 [2024-07-13 00:43:39.244725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.972 [2024-07-13 00:43:39.284717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.972 [2024-07-13 00:43:39.284755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.972 [2024-07-13 00:43:39.284761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.972 [2024-07-13 00:43:39.284768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.972 [2024-07-13 00:43:39.284772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.972 [2024-07-13 00:43:39.284806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.972 [2024-07-13 00:43:39.412929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.972 Malloc0 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.972 
00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.972 [2024-07-13 00:43:39.480735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1377284 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1377284 /var/tmp/bdevperf.sock 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1377284 ']' 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.972 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:27.972 [2024-07-13 00:43:39.529449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
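Unlike the bdev_io_wait runs, this bdevperf is started with -z, so it comes up idle on its own RPC socket (-r /var/tmp/bdevperf.sock) and everything after this point is driven externally: the controller attach arrives over RPC and the workload is triggered with bdevperf.py. Condensed, the flow traced below is:

  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # idle until perform_tests
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests                       # runs the 10 s verify pass at queue depth 1024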
00:18:27.972 [2024-07-13 00:43:39.529487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377284 ] 00:18:28.231 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.231 [2024-07-13 00:43:39.597481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.231 [2024-07-13 00:43:39.637676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.231 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.231 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:28.231 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:28.231 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.231 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:28.490 NVMe0n1 00:18:28.490 00:43:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.490 00:43:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.490 Running I/O for 10 seconds... 00:18:38.468 00:18:38.468 Latency(us) 00:18:38.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.468 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:38.468 Verification LBA range: start 0x0 length 0x4000 00:18:38.468 NVMe0n1 : 10.06 12357.68 48.27 0.00 0.00 82556.20 19603.81 58355.53 00:18:38.468 =================================================================================================================== 00:18:38.468 Total : 12357.68 48.27 0.00 0.00 82556.20 19603.81 58355.53 00:18:38.468 0 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1377284 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1377284 ']' 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1377284 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1377284 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1377284' 00:18:38.728 killing process with pid 1377284 00:18:38.728 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1377284 00:18:38.729 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.729 00:18:38.729 Latency(us) 00:18:38.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.729 
=================================================================================================================== 00:18:38.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1377284 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.729 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:38.729 rmmod nvme_tcp 00:18:38.729 rmmod nvme_fabrics 00:18:38.989 rmmod nvme_keyring 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1377258 ']' 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1377258 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1377258 ']' 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1377258 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1377258 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1377258' 00:18:38.989 killing process with pid 1377258 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1377258 00:18:38.989 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1377258 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.249 00:43:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.156 00:43:52 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.156 00:18:41.156 real 0m19.416s 00:18:41.156 user 0m22.684s 00:18:41.156 sys 0m6.015s 00:18:41.156 00:43:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.156 00:43:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:41.156 ************************************ 00:18:41.156 END TEST nvmf_queue_depth 00:18:41.156 ************************************ 00:18:41.156 00:43:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:41.156 00:43:52 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:41.156 00:43:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.156 00:43:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.156 00:43:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.156 ************************************ 00:18:41.156 START TEST nvmf_target_multipath 00:18:41.156 ************************************ 00:18:41.156 00:43:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:41.416 * Looking for test storage... 00:18:41.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.416 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.417 00:43:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:47.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:47.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:47.989 Found net devices under 0000:86:00.0: cvl_0_0 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:47.989 Found net devices under 0000:86:00.1: cvl_0_1 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.989 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:18:47.989 00:18:47.989 --- 10.0.0.2 ping statistics --- 00:18:47.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.990 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:18:47.990 00:18:47.990 --- 10.0.0.1 ping statistics --- 00:18:47.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.990 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:47.990 only one NIC for nvmf test 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:47.990 rmmod nvme_tcp 00:18:47.990 rmmod nvme_fabrics 00:18:47.990 rmmod nvme_keyring 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.990 00:43:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:49.418 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.419 00:18:49.419 real 0m8.071s 00:18:49.419 user 0m1.647s 00:18:49.419 sys 0m4.405s 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.419 00:44:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.419 ************************************ 00:18:49.419 END TEST nvmf_target_multipath 00:18:49.419 ************************************ 00:18:49.419 00:44:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:49.419 00:44:00 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.419 00:44:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.419 00:44:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.419 00:44:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.419 ************************************ 00:18:49.419 START TEST nvmf_zcopy 00:18:49.419 ************************************ 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:49.419 * Looking for test storage... 
00:18:49.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.419 00:44:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.992 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.992 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.992 00:44:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.992 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.992 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:55.993 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.993 
00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:55.993 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:55.993 Found net devices under 0000:86:00.0: cvl_0_0 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:55.993 Found net devices under 0000:86:00.1: cvl_0_1 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:18:55.993 00:18:55.993 --- 10.0.0.2 ping statistics --- 00:18:55.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.993 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:18:55.993 00:18:55.993 --- 10.0.0.1 ping statistics --- 00:18:55.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.993 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1385945 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1385945 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1385945 ']' 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.993 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.993 [2024-07-13 00:44:06.788618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:55.993 [2024-07-13 00:44:06.788665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.993 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.993 [2024-07-13 00:44:06.860440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.993 [2024-07-13 00:44:06.900003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.993 [2024-07-13 00:44:06.900040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:55.993 [2024-07-13 00:44:06.900048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.993 [2024-07-13 00:44:06.900054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.993 [2024-07-13 00:44:06.900059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.994 [2024-07-13 00:44:06.900077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.994 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.994 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:55.994 00:44:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.994 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:55.994 00:44:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 [2024-07-13 00:44:07.028236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 [2024-07-13 00:44:07.048387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 malloc0 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.994 
00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:55.994 { 00:18:55.994 "params": { 00:18:55.994 "name": "Nvme$subsystem", 00:18:55.994 "trtype": "$TEST_TRANSPORT", 00:18:55.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.994 "adrfam": "ipv4", 00:18:55.994 "trsvcid": "$NVMF_PORT", 00:18:55.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.994 "hdgst": ${hdgst:-false}, 00:18:55.994 "ddgst": ${ddgst:-false} 00:18:55.994 }, 00:18:55.994 "method": "bdev_nvme_attach_controller" 00:18:55.994 } 00:18:55.994 EOF 00:18:55.994 )") 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:55.994 00:44:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:55.994 "params": { 00:18:55.994 "name": "Nvme1", 00:18:55.994 "trtype": "tcp", 00:18:55.994 "traddr": "10.0.0.2", 00:18:55.994 "adrfam": "ipv4", 00:18:55.994 "trsvcid": "4420", 00:18:55.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.994 "hdgst": false, 00:18:55.994 "ddgst": false 00:18:55.994 }, 00:18:55.994 "method": "bdev_nvme_attach_controller" 00:18:55.994 }' 00:18:55.994 [2024-07-13 00:44:07.125505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:55.994 [2024-07-13 00:44:07.125544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386157 ] 00:18:55.994 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.994 [2024-07-13 00:44:07.190791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.994 [2024-07-13 00:44:07.231212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.994 Running I/O for 10 seconds... 
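Collapsed into one place, the target bring-up that zcopy.sh performs above through rpc_cmd reduces to a handful of RPCs. A minimal sketch, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and paths relative to the SPDK tree; every command and argument is copied from the trace, only the comments are added:

# zero-copy-enabled TCP transport (flags as expanded from NVMF_TRANSPORT_OPTS='-t tcp -o')
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# subsystem capped at 10 namespaces (-m 10), allow-any-host (-a)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB ram-backed bdev with 4 KiB blocks, exported as namespace 1
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1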
00:19:05.970
00:19:05.970 Latency(us)
00:19:05.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:05.970 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:19:05.970 Verification LBA range: start 0x0 length 0x1000
00:19:05.970 Nvme1n1 : 10.01 8719.54 68.12 0.00 0.00 14637.54 2236.77 24732.72
00:19:05.970 ===================================================================================================================
00:19:05.970 Total : 8719.54 68.12 0.00 0.00 14637.54 2236.77 24732.72
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1387762
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:19:06.230 {
00:19:06.230 "params": {
00:19:06.230 "name": "Nvme$subsystem",
00:19:06.230 "trtype": "$TEST_TRANSPORT",
00:19:06.230 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:06.230 "adrfam": "ipv4",
00:19:06.230 "trsvcid": "$NVMF_PORT",
00:19:06.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:06.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:06.230 "hdgst": ${hdgst:-false},
00:19:06.230 "ddgst": ${ddgst:-false}
00:19:06.230 },
00:19:06.230 "method": "bdev_nvme_attach_controller"
00:19:06.230 }
00:19:06.230 EOF
00:19:06.230 )")
00:19:06.230 [2024-07-13 00:44:17.601482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:06.230 [2024-07-13 00:44:17.601515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
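A quick consistency check on the verify-run table above: 8719.54 IOPS at the 8192-byte I/O size works out to 8719.54 × 8192 / 2^20 ≈ 68.12 MiB/s, matching the MiB/s column, and Little's law at queue depth 128 gives 128 / 8719.54 ≈ 14.68 ms of average per-I/O latency, in line with the reported 14637.54 us (the Average/min/max columns are in microseconds, per the Latency(us) header). The second bdevperf invocation being set up here switches to a 5-second 50/50 random read/write workload (-w randrw -M 50) against the same subsystem.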
00:19:06.230 [2024-07-13 00:44:17.609464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:06.230 [2024-07-13 00:44:17.609475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:19:06.230 00:44:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:19:06.230 "params": {
00:19:06.230 "name": "Nvme1",
00:19:06.230 "trtype": "tcp",
00:19:06.230 "traddr": "10.0.0.2",
00:19:06.230 "adrfam": "ipv4",
00:19:06.230 "trsvcid": "4420",
00:19:06.230 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:06.230 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:06.230 "hdgst": false,
00:19:06.230 "ddgst": false
00:19:06.230 },
00:19:06.230 "method": "bdev_nvme_attach_controller"
00:19:06.230 }'
[... 3 'Requested NSID 1 already in use' / 'Unable to add namespace' error pairs (00:44:17.617480-00:44:17.633534) elided ...]
00:19:06.230 [2024-07-13 00:44:17.636937] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:19:06.230 [2024-07-13 00:44:17.636982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387762 ]
[... 2 error pairs (00:44:17.641546-00:44:17.653590) elided ...]
00:19:06.230 EAL: No free 2048 kB hugepages reported on node 1
[... 5 error pairs (00:44:17.661600-00:44:17.701716) elided ...]
00:19:06.230 [2024-07-13 00:44:17.702149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[... 3 error pairs (00:44:17.709727-00:44:17.729812) elided ...]
00:19:06.230 [2024-07-13 00:44:17.741392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... 36 error pairs (00:44:17.741817-00:44:18.086774) elided ...]
00:19:06.749 Running I/O for 5 seconds...
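The *ERROR* pairs interleaved above are expected noise rather than a failure: while the 5-second randrw run is in flight, the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which pauses the subsystem (hence the nvmf_rpc_ns_paused callback in the message), fails because malloc0 already occupies NSID 1, and resumes. A hypothetical loop that would reproduce the pattern (an assumption about the harness, not the literal zcopy.sh source):

    # Re-add the same NSID while bdevperf ($perfpid, 1387762 in the trace) still runs;
    # each attempt triggers a pause/resume cycle and fails with "already in use".
    while kill -0 "$perfpid" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done

When reading a capture like this one, filtering the storm first makes the milestones visible, e.g. grep -vE 'already in use|Unable to add namespace' over the saved log.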
[... the 'Requested NSID 1 already in use' / 'Unable to add namespace' error pairs continue uninterrupted for the remainder of this capture, well over a hundred further pairs between 00:44:18.102232 and 00:44:20.034472, elided ...]
00:19:08.614 [2024-07-13 00:44:20.043848]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.043866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.052543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.052561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.061976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.061994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.071471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.071499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.080070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.080088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.089532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.089550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.098897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.098914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.108602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.108620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.117532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.117557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.126944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.126962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.136298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.136316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.145557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.145574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.159785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.159803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.614 [2024-07-13 00:44:20.168720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.614 [2024-07-13 00:44:20.168739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.177249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.177269] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.186457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.186474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.195931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.195949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.210702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.210720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.219669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.219687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.228444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.228461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.237696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.237715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.246877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.246895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.261552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.261570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.270403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.270421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.279612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.279630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.288990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.289008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.298244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.298262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.312959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.312977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.320287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.320305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.329511] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.329530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.338161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.338179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.347504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.347521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.356761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.356779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.365893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.365911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.375210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.375233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.384458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.384475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.393649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.393666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.403091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.403108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.413044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.413062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.421583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.421600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:08.872 [2024-07-13 00:44:20.430723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:08.872 [2024-07-13 00:44:20.430740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.439932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.439951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.449128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.449146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.458460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.458477] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.466987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.467004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.476356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.476374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.485514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.485531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.494387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.494405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.503204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.503222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.511902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.511920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.521006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.521024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.530252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.530269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.544817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.544834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.553927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.553949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.563958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.563975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.570939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.570956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.581925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.581942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.590730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.590747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.599684] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.599702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.608578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.608596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.617826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.617843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.627292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.131 [2024-07-13 00:44:20.627310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.131 [2024-07-13 00:44:20.636152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.132 [2024-07-13 00:44:20.636170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.132 [2024-07-13 00:44:20.645436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.132 [2024-07-13 00:44:20.645453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.132 [2024-07-13 00:44:20.654079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.132 [2024-07-13 00:44:20.654096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.132 [2024-07-13 00:44:20.663338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.132 [2024-07-13 00:44:20.663356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.132 [2024-07-13 00:44:20.672573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.132 [2024-07-13 00:44:20.672590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.132 [2024-07-13 00:44:20.681817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.132 [2024-07-13 00:44:20.681834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.690541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.690560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.699985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.700002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.709357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.709374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.718088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.718105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.732868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.732889] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.741930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.741947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.751432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.751450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.760726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.760745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.769617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.769635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.778504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.778523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.787840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.787859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.797391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.797411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.806052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.806069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.815461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.815480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.830136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.830155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.839237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.839255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.848765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.848783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.858164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.858182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.867371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.867390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.882202] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.882220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.889816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.889833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.898816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.898834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.908374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.908391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.916952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.916973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.926201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.926219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.935431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.935449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.391 [2024-07-13 00:44:20.944928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.391 [2024-07-13 00:44:20.944946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:20.954158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:20.954177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:20.962961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:20.962979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:20.972262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:20.972280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:20.981697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:20.981715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:20.991094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:20.991113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.000282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.000300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.009753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.009771] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.019165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.019182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.026152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.026169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.037107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.037126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.045948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.045966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.055250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.055268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.064432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.064450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.073726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.073744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.083030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.083048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.092268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.092291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.101096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.101114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.110322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.110341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.119005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.119022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.127538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.127556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.136838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.136856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.145813] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.145832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.155088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.155106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.164362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.164380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.173713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.173731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.183028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.183046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.192324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.192344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.650 [2024-07-13 00:44:21.201706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.650 [2024-07-13 00:44:21.201723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.211119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.211138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.220871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.220889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.229541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.229559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.238857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.238875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.252862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.252880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.262000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.262019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.270679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.270698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.280046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.280064] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.286935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.286952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.302666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.302685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.311336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.311353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.320511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.320528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.329373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.329391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.338588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.338605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.347412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.347430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.356016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.356034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.364674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.364692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.374027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.374045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.909 [2024-07-13 00:44:21.383297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.909 [2024-07-13 00:44:21.383314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 [2024-07-13 00:44:21.398057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.910 [2024-07-13 00:44:21.398075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 [2024-07-13 00:44:21.407076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.910 [2024-07-13 00:44:21.407093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 [2024-07-13 00:44:21.416790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.910 [2024-07-13 00:44:21.416807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 [2024-07-13 00:44:21.425704] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.910 [2024-07-13 00:44:21.425722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 [2024-07-13 00:44:21.435066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.910 [2024-07-13 00:44:21.435083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 [2024-07-13 00:44:21.450100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.910 [2024-07-13 00:44:21.450118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 [2024-07-13 00:44:21.460352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:09.910 [2024-07-13 00:44:21.460371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.469190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.469209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.478497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.478515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.487860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.487878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.502181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.502199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.509939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.509957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.518941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.518959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.527417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.527434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.536820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.536838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.551308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.551327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.560257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.560275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.568966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.568985] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.578721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.578738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.587220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.587242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.596402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.596420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.605555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.605572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.614850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.614868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.624199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.624217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.632827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.632846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.642132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.642150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.651408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.651426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.660617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.660635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.675131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.675151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.684266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.684284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.698628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.698646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.706332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.706349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.716516] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.716534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.169 [2024-07-13 00:44:21.725557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.169 [2024-07-13 00:44:21.725576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.735000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.735019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.749663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.749681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.758651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.758669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.768015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.768033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.776617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.776634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.785925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.785942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.800672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.800690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.814757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.814775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.428 [2024-07-13 00:44:21.823823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.428 [2024-07-13 00:44:21.823840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.429 [2024-07-13 00:44:21.832561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.429 [2024-07-13 00:44:21.832578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.429 [2024-07-13 00:44:21.842589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.429 [2024-07-13 00:44:21.842606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.429 [2024-07-13 00:44:21.856735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.429 [2024-07-13 00:44:21.856753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:10.429 [2024-07-13 00:44:21.865718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:10.429 [2024-07-13 00:44:21.865735] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:10.429 [2024-07-13 00:44:21.874934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:10.429 [2024-07-13 00:44:21.874951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for every add-namespace attempt between 00:44:21.884128 and 00:44:23.094612; identical occurrences trimmed ...]
00:19:11.727 [2024-07-13 00:44:23.108587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:11.727 [2024-07-13 00:44:23.108605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:11.727
00:19:11.727 Latency(us)
00:19:11.727 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:19:11.727 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:11.727 Nvme1n1                     :       5.01   16626.67     129.90       0.00       0.00    7690.66    3276.80   15728.64
00:19:11.727 ===================================================================================================================
00:19:11.727 Total                       :              16626.67     129.90       0.00       0.00    7690.66    3276.80   15728.64
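The error pair trimmed above always arrives in twos: subsystem.c rejects the request because NSID 1 is still attached, and the RPC layer surfaces that as "Unable to add namespace". A minimal sketch of triggering the same pair by hand with scripts/rpc.py, assuming a running target with subsystem nqn.2016-06.io.spdk:cnode1 (the bdev name Malloc0 is illustrative, not taken from this run):

    # First add succeeds and claims NSID 1.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    # Any further add with -n 1 fails with "Requested NSID 1 already in use",
    # reported back through the RPC layer as "Unable to add namespace".
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1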
00:19:11.727 [2024-07-13 00:44:23.113672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:11.727 [2024-07-13 00:44:23.113688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats between 00:44:23.121692 and 00:44:23.282129; identical occurrences trimmed ...]
00:19:11.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1387762) - No such process
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1387762
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:11.986 delay0
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:11.986 00:44:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:19:11.986 EAL: No free 2048 kB hugepages reported on node 1
00:19:18.558 [2024-07-13 00:44:23.464377] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:19:18.558 [2024-07-13 00:44:29.837128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x637fd0 is same with the state(5) to be set
00:19:18.558 [2024-07-13 00:44:29.837171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x637fd0 is same with the state(5) to be set
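Before launching the abort example, the trace above swaps the malloc-backed namespace for a delay bdev so that I/O stays in flight long enough to be aborted. A condensed sketch of the same sequence with scripts/rpc.py, equivalent to the rpc_cmd calls in the trace (the delay arguments are the microsecond latencies passed above, roughly one second per operation class):

    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # read/write latencies in microseconds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'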
00:19:18.558 Initializing NVMe Controllers
00:19:18.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:18.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:18.558 Initialization complete. Launching workers.
00:19:18.558 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1328
00:19:18.558 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1606, failed to submit 42
00:19:18.558 success 1414, unsuccess 192, failed 0
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:18.558 rmmod nvme_tcp
00:19:18.558 rmmod nvme_fabrics
00:19:18.558 rmmod nvme_keyring
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1385945 ']'
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1385945
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1385945 ']'
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1385945
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1385945
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1385945'
00:19:18.558 killing process with pid 1385945
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1385945
00:19:18.558 00:44:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1385945
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
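The killprocess trace above shows the guard checks common/autotest_common.sh performs before signalling the target. A sketch of that logic as a standalone helper, reconstructed from the xtrace lines, so the exact internals are an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # mirrors '[' -z 1385945 ']' in the trace
        kill -0 "$pid" || return 0                 # process already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
            [ "$process_name" = sudo ] && return 0            # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }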
00:19:18.817 00:44:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:20.770 00:44:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:20.770
00:19:20.770 real 0m31.370s
00:19:20.770 user 0m42.718s
00:19:20.771 sys 0m10.723s
00:19:20.771 00:44:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:20.771 00:44:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:20.771 ************************************
00:19:20.771 END TEST nvmf_zcopy
00:19:20.771 ************************************
00:19:20.771 00:44:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:19:20.771 00:44:32 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:19:20.771 00:44:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:19:20.771 00:44:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:20.771 00:44:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:20.771 ************************************
00:19:20.771 START TEST nvmf_nmic
00:19:20.771 ************************************
00:19:20.771 00:44:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:19:21.031 * Looking for test storage...
00:19:21.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
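nvmf/common.sh above only establishes defaults: ports 4420 through 4422, a host NQN and host ID generated by nvme gen-hostnqn, and NVME_CONNECT='nvme connect' with the NVME_HOST flag array. A hedged sketch of how an initiator-side step typically consumes those defaults (the subsystem NQN cnode1 here is illustrative, not part of this trace):

    # Connect using the defaults defined above; the flags mirror the NVME_HOST array.
    nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"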
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:21.031 00:44:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 prepend the same Go/protoc/golangci toolchain directories again; the near-identical PATH dumps are trimmed ...]
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
[... paths/export.sh@6 echoes the resulting PATH; value trimmed ...]
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11
-- # MALLOC_BDEV_SIZE=64 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.032 00:44:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:27.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.602 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:27.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:27.603 Found net devices under 0000:86:00.0: cvl_0_0 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.603 00:44:37 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:27.603 Found net devices under 0000:86:00.1: cvl_0_1 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.603 00:44:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:27.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:27.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms
00:19:27.603
00:19:27.603 --- 10.0.0.2 ping statistics ---
00:19:27.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:27.603 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:27.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:27.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms
00:19:27.603
00:19:27.603 --- 10.0.0.1 ping statistics ---
00:19:27.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:27.603 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1393329
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1393329
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1393329 ']'
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:27.603 [2024-07-13 00:44:38.268562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
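The target is launched inside the cvl_0_0_ns_spdk network namespace created earlier, so initiator traffic (10.0.0.1 on cvl_0_1, root namespace) and target traffic (10.0.0.2 on cvl_0_0, inside the namespace) actually cross the link between the two ports. A condensed sketch of that plumbing, using the same commands and interface names as the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # reachability check, as in the log
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF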
00:19:27.603 [2024-07-13 00:44:38.268610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.603 [2024-07-13 00:44:38.340189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:27.603 [2024-07-13 00:44:38.382646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.603 [2024-07-13 00:44:38.382687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.603 [2024-07-13 00:44:38.382694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.603 [2024-07-13 00:44:38.382699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.603 [2024-07-13 00:44:38.382704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.603 [2024-07-13 00:44:38.382761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.603 [2024-07-13 00:44:38.382892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.603 [2024-07-13 00:44:38.382996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.603 [2024-07-13 00:44:38.382998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.603 [2024-07-13 00:44:38.523175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.603 Malloc0 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.603 [2024-07-13 00:44:38.566795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:27.603 test case1: single bdev can't be used in multiple subsystems 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.603 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.604 [2024-07-13 00:44:38.590711] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:27.604 [2024-07-13 00:44:38.590733] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:27.604 [2024-07-13 00:44:38.590741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.604 request: 00:19:27.604 { 00:19:27.604 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.604 "namespace": { 00:19:27.604 "bdev_name": "Malloc0", 00:19:27.604 "no_auto_visible": false 00:19:27.604 }, 00:19:27.604 "method": "nvmf_subsystem_add_ns", 00:19:27.604 "req_id": 1 00:19:27.604 } 00:19:27.604 Got JSON-RPC error response 00:19:27.604 response: 00:19:27.604 { 00:19:27.604 "code": -32602, 00:19:27.604 "message": "Invalid parameters" 00:19:27.604 } 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:19:27.604 Adding namespace failed - expected result. 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:27.604 test case2: host connect to nvmf target in multiple paths 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:27.604 [2024-07-13 00:44:38.598828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.604 00:44:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:28.542 00:44:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:29.480 00:44:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:29.480 00:44:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.480 00:44:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:29.480 00:44:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:29.480 00:44:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:19:31.385 00:44:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:31.385 00:44:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:31.385 00:44:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:31.385 00:44:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:31.385 00:44:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:31.385 00:44:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:19:31.385 00:44:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:31.385 [global] 00:19:31.385 thread=1 00:19:31.385 invalidate=1 00:19:31.385 rw=write 00:19:31.385 time_based=1 00:19:31.385 runtime=1 00:19:31.385 ioengine=libaio 00:19:31.385 direct=1 00:19:31.385 bs=4096 00:19:31.385 iodepth=1 00:19:31.385 norandommap=0 00:19:31.385 numjobs=1 00:19:31.385 00:19:31.385 verify_dump=1 00:19:31.385 verify_backlog=512 00:19:31.385 verify_state_save=0 00:19:31.385 do_verify=1 00:19:31.385 verify=crc32c-intel 00:19:31.385 [job0] 00:19:31.385 filename=/dev/nvme0n1 00:19:31.385 Could not set queue depth (nvme0n1) 00:19:31.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:31.643 fio-3.35 00:19:31.643 Starting 1 thread 00:19:33.019 00:19:33.019 job0: (groupid=0, jobs=1): err= 0: pid=1394204: Sat Jul 13 00:44:44 2024 00:19:33.019 read: IOPS=22, BW=89.8KiB/s 
(91.9kB/s)(92.0KiB/1025msec) 00:19:33.019 slat (nsec): min=9177, max=23452, avg=21888.57, stdev=2841.12 00:19:33.019 clat (usec): min=40784, max=42044, avg=41071.99, stdev=324.15 00:19:33.019 lat (usec): min=40808, max=42065, avg=41093.88, stdev=323.35 00:19:33.019 clat percentiles (usec): 00:19:33.019 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:33.019 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:33.019 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:19:33.019 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:33.019 | 99.99th=[42206] 00:19:33.019 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:33.019 slat (nsec): min=9710, max=37756, avg=10743.84, stdev=1645.46 00:19:33.019 clat (usec): min=110, max=279, avg=140.96, stdev=18.08 00:19:33.019 lat (usec): min=121, max=316, avg=151.71, stdev=18.62 00:19:33.019 clat percentiles (usec): 00:19:33.019 | 1.00th=[ 114], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 119], 00:19:33.019 | 30.00th=[ 130], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:19:33.019 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:19:33.019 | 99.00th=[ 169], 99.50th=[ 223], 99.90th=[ 281], 99.95th=[ 281], 00:19:33.019 | 99.99th=[ 281] 00:19:33.019 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:19:33.019 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:19:33.019 lat (usec) : 250=95.51%, 500=0.19% 00:19:33.019 lat (msec) : 50=4.30% 00:19:33.019 cpu : usr=0.59%, sys=0.20%, ctx=535, majf=0, minf=2 00:19:33.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:33.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.019 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:33.019 00:19:33.019 Run status group 0 (all jobs): 00:19:33.019 READ: bw=89.8KiB/s (91.9kB/s), 89.8KiB/s-89.8KiB/s (91.9kB/s-91.9kB/s), io=92.0KiB (94.2kB), run=1025-1025msec 00:19:33.019 WRITE: bw=1998KiB/s (2046kB/s), 1998KiB/s-1998KiB/s (2046kB/s-2046kB/s), io=2048KiB (2097kB), run=1025-1025msec 00:19:33.019 00:19:33.019 Disk stats (read/write): 00:19:33.019 nvme0n1: ios=69/512, merge=0/0, ticks=1011/65, in_queue=1076, util=95.49% 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:33.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.019 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.019 rmmod nvme_tcp 00:19:33.019 rmmod nvme_fabrics 00:19:33.278 rmmod nvme_keyring 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1393329 ']' 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1393329 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1393329 ']' 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1393329 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1393329 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1393329' 00:19:33.279 killing process with pid 1393329 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1393329 00:19:33.279 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1393329 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.538 00:44:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.444 00:44:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.444 00:19:35.444 real 0m14.643s 00:19:35.444 user 0m32.819s 00:19:35.444 sys 0m5.109s 00:19:35.444 00:44:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.444 00:44:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:35.444 ************************************ 00:19:35.444 END TEST nvmf_nmic 00:19:35.444 ************************************ 00:19:35.444 00:44:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:35.444 00:44:46 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:35.444 00:44:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:35.444 00:44:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.444 00:44:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.444 ************************************ 00:19:35.444 START TEST nvmf_fio_target 00:19:35.444 ************************************ 00:19:35.444 00:44:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:35.703 * Looking for test storage... 00:19:35.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate toolchain prepends elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same value with one more prepend, elided] 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same value with one more prepend, elided] 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo [expanded PATH, elided] 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.704 00:44:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.982 00:44:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.982 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.982 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.982 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.242 00:44:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:41.242 Found net devices under 0000:86:00.0: cvl_0_0 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:41.242 Found net devices under 0000:86:00.1: cvl_0_1 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:41.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:19:41.242 00:19:41.242 --- 10.0.0.2 ping statistics --- 00:19:41.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.242 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:41.242 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:19:41.242 00:19:41.242 --- 10.0.0.1 ping statistics --- 00:19:41.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.243 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:41.243 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1397958 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1397958 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1397958 ']' 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
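The two pings above close out the test-bed wiring that nvmf_tcp_init performs for this run: one port of the detected NIC pair (cvl_0_0) is moved into a private network namespace to serve as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, and one ICMP round trip in each direction proves the 10.0.0.0/24 link before the target starts. Condensed from the commands logged above (interface and namespace names are the ones this run detected, not fixed values):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator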
00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.502 00:44:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.502 [2024-07-13 00:44:52.886278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:41.502 [2024-07-13 00:44:52.886324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.502 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.502 [2024-07-13 00:44:52.960181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:41.502 [2024-07-13 00:44:53.001912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.502 [2024-07-13 00:44:53.001952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.502 [2024-07-13 00:44:53.001958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.502 [2024-07-13 00:44:53.001964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.502 [2024-07-13 00:44:53.001969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.502 [2024-07-13 00:44:53.002086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.502 [2024-07-13 00:44:53.002208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.502 [2024-07-13 00:44:53.002318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.502 [2024-07-13 00:44:53.002319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:42.438 [2024-07-13 00:44:53.891880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.438 00:44:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:42.697 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:42.697 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:42.955 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:42.955 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:42.955 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
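With Malloc0 and Malloc1 created as plain namespace backing and Malloc2 started as RAID material, the steps logged below assemble the remaining bdevs and expose everything through a single subsystem, so the initiator ends up seeing four namespaces (two malloc, one RAID-0, one concat) behind one controller, which is what the later waitforserial check with a count of 4 relies on. A condensed sketch of that sequence, assuming rpc.py is invoked from the SPDK repo and that $NVME_HOSTNQN/$NVME_HOSTID hold the values nvmf/common.sh generated for this run:

    ./scripts/rpc.py bdev_malloc_create 64 512       # -> Malloc3 (Malloc4-6 created the same way)
    ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    ./scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420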
00:19:42.955 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:43.213 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:43.213 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:43.471 00:44:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:43.729 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:43.729 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:43.729 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:43.988 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:43.988 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:43.988 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:44.247 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:44.505 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:44.505 00:44:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.505 00:44:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:44.505 00:44:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:44.763 00:44:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:45.021 [2024-07-13 00:44:56.361718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.021 00:44:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:45.021 00:44:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:45.279 00:44:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:46.721 00:44:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:46.721 00:44:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:46.721 00:44:57 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:46.721 00:44:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:46.721 00:44:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:46.721 00:44:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:48.627 00:44:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:48.627 00:44:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:48.627 00:44:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:48.627 00:44:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:48.627 00:44:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:48.627 00:44:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:48.627 00:44:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:48.627 [global] 00:19:48.627 thread=1 00:19:48.627 invalidate=1 00:19:48.627 rw=write 00:19:48.627 time_based=1 00:19:48.627 runtime=1 00:19:48.627 ioengine=libaio 00:19:48.627 direct=1 00:19:48.627 bs=4096 00:19:48.627 iodepth=1 00:19:48.627 norandommap=0 00:19:48.627 numjobs=1 00:19:48.627 00:19:48.627 verify_dump=1 00:19:48.627 verify_backlog=512 00:19:48.627 verify_state_save=0 00:19:48.627 do_verify=1 00:19:48.627 verify=crc32c-intel 00:19:48.627 [job0] 00:19:48.627 filename=/dev/nvme0n1 00:19:48.627 [job1] 00:19:48.627 filename=/dev/nvme0n2 00:19:48.627 [job2] 00:19:48.627 filename=/dev/nvme0n3 00:19:48.627 [job3] 00:19:48.627 filename=/dev/nvme0n4 00:19:48.627 Could not set queue depth (nvme0n1) 00:19:48.627 Could not set queue depth (nvme0n2) 00:19:48.627 Could not set queue depth (nvme0n3) 00:19:48.627 Could not set queue depth (nvme0n4) 00:19:48.886 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.886 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.886 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.886 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.886 fio-3.35 00:19:48.886 Starting 4 threads 00:19:50.290 00:19:50.290 job0: (groupid=0, jobs=1): err= 0: pid=1399328: Sat Jul 13 00:45:01 2024 00:19:50.290 read: IOPS=657, BW=2631KiB/s (2694kB/s)(2652KiB/1008msec) 00:19:50.290 slat (nsec): min=6619, max=26680, avg=7794.25, stdev=2273.78 00:19:50.290 clat (usec): min=201, max=42147, avg=1230.93, stdev=6245.32 00:19:50.290 lat (usec): min=210, max=42157, avg=1238.72, stdev=6246.16 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:19:50.290 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:19:50.290 | 70.00th=[ 255], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:19:50.290 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:50.290 | 99.99th=[42206] 00:19:50.290 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:19:50.290 slat (nsec): min=9396, max=38233, avg=10549.06, stdev=1899.29 00:19:50.290 clat 
(usec): min=120, max=692, avg=167.67, stdev=33.96 00:19:50.290 lat (usec): min=131, max=702, avg=178.22, stdev=34.40 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 126], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:19:50.290 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:19:50.290 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:19:50.290 | 99.00th=[ 223], 99.50th=[ 302], 99.90th=[ 644], 99.95th=[ 693], 00:19:50.290 | 99.99th=[ 693] 00:19:50.290 bw ( KiB/s): min= 2832, max= 5360, per=22.53%, avg=4096.00, stdev=1787.57, samples=2 00:19:50.290 iops : min= 708, max= 1340, avg=1024.00, stdev=446.89, samples=2 00:19:50.290 lat (usec) : 250=83.17%, 500=15.71%, 750=0.12% 00:19:50.290 lat (msec) : 2=0.06%, 50=0.95% 00:19:50.290 cpu : usr=0.89%, sys=1.49%, ctx=1690, majf=0, minf=2 00:19:50.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 issued rwts: total=663,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.290 job1: (groupid=0, jobs=1): err= 0: pid=1399329: Sat Jul 13 00:45:01 2024 00:19:50.290 read: IOPS=2049, BW=8200KiB/s (8397kB/s)(8208KiB/1001msec) 00:19:50.290 slat (nsec): min=6760, max=41598, avg=7958.58, stdev=1833.82 00:19:50.290 clat (usec): min=168, max=427, avg=237.68, stdev=25.20 00:19:50.290 lat (usec): min=175, max=452, avg=245.64, stdev=25.33 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 215], 00:19:50.290 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:19:50.290 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 277], 00:19:50.290 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 334], 99.95th=[ 351], 00:19:50.290 | 99.99th=[ 429] 00:19:50.290 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:50.290 slat (nsec): min=9978, max=46794, avg=11492.02, stdev=2117.16 00:19:50.290 clat (usec): min=120, max=1370, avg=176.79, stdev=56.00 00:19:50.290 lat (usec): min=131, max=1383, avg=188.28, stdev=56.11 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:19:50.290 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 176], 00:19:50.290 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 245], 95.00th=[ 255], 00:19:50.290 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 1336], 99.95th=[ 1336], 00:19:50.290 | 99.99th=[ 1369] 00:19:50.290 bw ( KiB/s): min=10768, max=10768, per=59.24%, avg=10768.00, stdev= 0.00, samples=1 00:19:50.290 iops : min= 2692, max= 2692, avg=2692.00, stdev= 0.00, samples=1 00:19:50.290 lat (usec) : 250=81.35%, 500=18.56%, 750=0.02% 00:19:50.290 lat (msec) : 2=0.07% 00:19:50.290 cpu : usr=3.40%, sys=7.80%, ctx=4612, majf=0, minf=1 00:19:50.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 issued rwts: total=2052,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.290 job2: (groupid=0, jobs=1): err= 0: pid=1399336: Sat Jul 13 00:45:01 2024 00:19:50.290 read: IOPS=418, BW=1674KiB/s 
(1715kB/s)(1676KiB/1001msec) 00:19:50.290 slat (nsec): min=7867, max=30260, avg=9804.70, stdev=3248.20 00:19:50.290 clat (usec): min=183, max=44942, avg=2135.30, stdev=8573.74 00:19:50.290 lat (usec): min=192, max=44955, avg=2145.11, stdev=8576.03 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 198], 5.00th=[ 215], 10.00th=[ 235], 20.00th=[ 253], 00:19:50.290 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:19:50.290 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 1336], 00:19:50.290 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:19:50.290 | 99.99th=[44827] 00:19:50.290 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:50.290 slat (nsec): min=11365, max=43485, avg=13397.35, stdev=2271.64 00:19:50.290 clat (usec): min=138, max=261, avg=178.58, stdev=16.71 00:19:50.290 lat (usec): min=152, max=279, avg=191.97, stdev=17.10 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:19:50.290 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:19:50.290 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:19:50.290 | 99.00th=[ 219], 99.50th=[ 237], 99.90th=[ 262], 99.95th=[ 262], 00:19:50.290 | 99.99th=[ 262] 00:19:50.290 bw ( KiB/s): min= 4096, max= 4096, per=22.53%, avg=4096.00, stdev= 0.00, samples=1 00:19:50.290 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:50.290 lat (usec) : 250=62.51%, 500=35.23% 00:19:50.290 lat (msec) : 2=0.11%, 4=0.11%, 50=2.04% 00:19:50.290 cpu : usr=0.50%, sys=2.00%, ctx=932, majf=0, minf=1 00:19:50.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 issued rwts: total=419,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.290 job3: (groupid=0, jobs=1): err= 0: pid=1399337: Sat Jul 13 00:45:01 2024 00:19:50.290 read: IOPS=383, BW=1535KiB/s (1571kB/s)(1556KiB/1014msec) 00:19:50.290 slat (nsec): min=4465, max=33339, avg=7958.78, stdev=3315.91 00:19:50.290 clat (usec): min=195, max=42173, avg=2349.86, stdev=9054.47 00:19:50.290 lat (usec): min=200, max=42180, avg=2357.82, stdev=9056.39 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:19:50.290 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:19:50.290 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[40633], 00:19:50.290 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:50.290 | 99.99th=[42206] 00:19:50.290 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:19:50.290 slat (nsec): min=9516, max=37816, avg=12699.64, stdev=3046.89 00:19:50.290 clat (usec): min=148, max=288, avg=170.74, stdev=12.74 00:19:50.290 lat (usec): min=158, max=325, avg=183.44, stdev=13.95 00:19:50.290 clat percentiles (usec): 00:19:50.290 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:19:50.290 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:19:50.290 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:19:50.290 | 99.00th=[ 202], 99.50th=[ 204], 99.90th=[ 289], 99.95th=[ 289], 00:19:50.290 | 99.99th=[ 289] 00:19:50.290 bw ( KiB/s): min= 4096, max= 4096, 
per=22.53%, avg=4096.00, stdev= 0.00, samples=1 00:19:50.290 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:50.290 lat (usec) : 250=79.02%, 500=18.76% 00:19:50.290 lat (msec) : 50=2.22% 00:19:50.290 cpu : usr=0.59%, sys=0.79%, ctx=902, majf=0, minf=1 00:19:50.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.290 issued rwts: total=389,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.290 00:19:50.290 Run status group 0 (all jobs): 00:19:50.290 READ: bw=13.6MiB/s (14.2MB/s), 1535KiB/s-8200KiB/s (1571kB/s-8397kB/s), io=13.8MiB (14.4MB), run=1001-1014msec 00:19:50.290 WRITE: bw=17.8MiB/s (18.6MB/s), 2020KiB/s-9.99MiB/s (2068kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1014msec 00:19:50.290 00:19:50.290 Disk stats (read/write): 00:19:50.290 nvme0n1: ios=591/1024, merge=0/0, ticks=767/167, in_queue=934, util=86.07% 00:19:50.290 nvme0n2: ios=1967/2048, merge=0/0, ticks=497/325, in_queue=822, util=91.07% 00:19:50.290 nvme0n3: ios=472/512, merge=0/0, ticks=1371/83, in_queue=1454, util=93.65% 00:19:50.290 nvme0n4: ios=407/512, merge=0/0, ticks=1653/83, in_queue=1736, util=94.23% 00:19:50.290 00:45:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:50.290 [global] 00:19:50.290 thread=1 00:19:50.290 invalidate=1 00:19:50.290 rw=randwrite 00:19:50.290 time_based=1 00:19:50.290 runtime=1 00:19:50.290 ioengine=libaio 00:19:50.290 direct=1 00:19:50.290 bs=4096 00:19:50.290 iodepth=1 00:19:50.290 norandommap=0 00:19:50.290 numjobs=1 00:19:50.290 00:19:50.290 verify_dump=1 00:19:50.290 verify_backlog=512 00:19:50.290 verify_state_save=0 00:19:50.290 do_verify=1 00:19:50.290 verify=crc32c-intel 00:19:50.290 [job0] 00:19:50.290 filename=/dev/nvme0n1 00:19:50.290 [job1] 00:19:50.290 filename=/dev/nvme0n2 00:19:50.290 [job2] 00:19:50.290 filename=/dev/nvme0n3 00:19:50.290 [job3] 00:19:50.290 filename=/dev/nvme0n4 00:19:50.290 Could not set queue depth (nvme0n1) 00:19:50.290 Could not set queue depth (nvme0n2) 00:19:50.290 Could not set queue depth (nvme0n3) 00:19:50.290 Could not set queue depth (nvme0n4) 00:19:50.548 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.548 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.548 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.548 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.548 fio-3.35 00:19:50.548 Starting 4 threads 00:19:51.925 00:19:51.925 job0: (groupid=0, jobs=1): err= 0: pid=1399788: Sat Jul 13 00:45:03 2024 00:19:51.925 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:19:51.925 slat (nsec): min=8493, max=23093, avg=16482.50, stdev=6631.20 00:19:51.925 clat (usec): min=40857, max=41934, avg=41068.20, stdev=287.50 00:19:51.925 lat (usec): min=40880, max=41944, avg=41084.68, stdev=284.64 00:19:51.925 clat percentiles (usec): 00:19:51.925 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:51.925 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:19:51.925 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:51.925 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:51.925 | 99.99th=[41681] 00:19:51.925 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:19:51.925 slat (nsec): min=9244, max=43262, avg=10691.91, stdev=2547.57 00:19:51.925 clat (usec): min=134, max=1322, avg=190.39, stdev=59.17 00:19:51.925 lat (usec): min=145, max=1333, avg=201.08, stdev=59.44 00:19:51.925 clat percentiles (usec): 00:19:51.925 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:19:51.925 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 194], 00:19:51.925 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 233], 95.00th=[ 243], 00:19:51.925 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 1319], 99.95th=[ 1319], 00:19:51.925 | 99.99th=[ 1319] 00:19:51.925 bw ( KiB/s): min= 4096, max= 4096, per=16.06%, avg=4096.00, stdev= 0.00, samples=1 00:19:51.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:51.925 lat (usec) : 250=93.26%, 500=2.43% 00:19:51.925 lat (msec) : 2=0.19%, 50=4.12% 00:19:51.925 cpu : usr=0.30%, sys=0.50%, ctx=540, majf=0, minf=2 00:19:51.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.925 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.925 job1: (groupid=0, jobs=1): err= 0: pid=1399789: Sat Jul 13 00:45:03 2024 00:19:51.925 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:19:51.925 slat (nsec): min=6522, max=26328, avg=7661.88, stdev=1410.39 00:19:51.925 clat (usec): min=184, max=40884, avg=287.04, stdev=1522.20 00:19:51.925 lat (usec): min=192, max=40908, avg=294.70, stdev=1522.63 00:19:51.925 clat percentiles (usec): 00:19:51.925 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:19:51.925 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:19:51.925 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:19:51.925 | 99.00th=[ 347], 99.50th=[ 420], 99.90th=[38536], 99.95th=[40633], 00:19:51.925 | 99.99th=[40633] 00:19:51.925 write: IOPS=2050, BW=8204KiB/s (8401kB/s)(8212KiB/1001msec); 0 zone resets 00:19:51.925 slat (nsec): min=9195, max=46140, avg=11603.55, stdev=3011.81 00:19:51.925 clat (usec): min=120, max=345, avg=175.64, stdev=35.98 00:19:51.925 lat (usec): min=131, max=355, avg=187.24, stdev=37.10 00:19:51.925 clat percentiles (usec): 00:19:51.925 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:19:51.925 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 169], 00:19:51.925 | 70.00th=[ 182], 80.00th=[ 206], 90.00th=[ 239], 95.00th=[ 247], 00:19:51.925 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 338], 00:19:51.925 | 99.99th=[ 347] 00:19:51.925 bw ( KiB/s): min= 8192, max= 8192, per=32.12%, avg=8192.00, stdev= 0.00, samples=1 00:19:51.925 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:51.925 lat (usec) : 250=93.90%, 500=6.00%, 750=0.02% 00:19:51.925 lat (msec) : 50=0.07% 00:19:51.925 cpu : usr=1.70%, sys=6.00%, ctx=4103, majf=0, minf=1 00:19:51.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.925 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.925 issued rwts: total=2048,2053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.925 job2: (groupid=0, jobs=1): err= 0: pid=1399790: Sat Jul 13 00:45:03 2024 00:19:51.925 read: IOPS=1041, BW=4168KiB/s (4268kB/s)(4172KiB/1001msec) 00:19:51.925 slat (nsec): min=7518, max=36493, avg=8604.57, stdev=2021.64 00:19:51.925 clat (usec): min=207, max=41086, avg=669.33, stdev=4059.92 00:19:51.925 lat (usec): min=215, max=41110, avg=677.93, stdev=4061.15 00:19:51.925 clat percentiles (usec): 00:19:51.925 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:19:51.925 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:19:51.925 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 293], 95.00th=[ 306], 00:19:51.925 | 99.00th=[28443], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:51.925 | 99.99th=[41157] 00:19:51.925 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:51.925 slat (nsec): min=10518, max=35589, avg=11872.93, stdev=1572.57 00:19:51.925 clat (usec): min=132, max=769, avg=173.31, stdev=28.06 00:19:51.925 lat (usec): min=144, max=780, avg=185.18, stdev=28.23 00:19:51.925 clat percentiles (usec): 00:19:51.925 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:19:51.925 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:19:51.925 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 208], 00:19:51.925 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 676], 99.95th=[ 766], 00:19:51.925 | 99.99th=[ 766] 00:19:51.925 bw ( KiB/s): min= 4096, max= 4096, per=16.06%, avg=4096.00, stdev= 0.00, samples=1 00:19:51.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:51.925 lat (usec) : 250=83.56%, 500=15.94%, 750=0.04%, 1000=0.04% 00:19:51.925 lat (msec) : 50=0.43% 00:19:51.925 cpu : usr=1.50%, sys=4.80%, ctx=2580, majf=0, minf=1 00:19:51.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.926 issued rwts: total=1043,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.926 job3: (groupid=0, jobs=1): err= 0: pid=1399792: Sat Jul 13 00:45:03 2024 00:19:51.926 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:19:51.926 slat (nsec): min=6944, max=25021, avg=8108.69, stdev=1185.42 00:19:51.926 clat (usec): min=186, max=960, avg=247.80, stdev=34.15 00:19:51.926 lat (usec): min=194, max=967, avg=255.91, stdev=34.22 00:19:51.926 clat percentiles (usec): 00:19:51.926 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:19:51.926 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:19:51.926 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:19:51.926 | 99.00th=[ 429], 99.50th=[ 449], 99.90th=[ 490], 99.95th=[ 515], 00:19:51.926 | 99.99th=[ 963] 00:19:51.926 write: IOPS=2329, BW=9319KiB/s (9542kB/s)(9328KiB/1001msec); 0 zone resets 00:19:51.926 slat (nsec): min=10140, max=45512, avg=11473.35, stdev=1957.99 00:19:51.926 clat (usec): min=134, max=2276, avg=186.92, stdev=55.12 00:19:51.926 lat (usec): min=145, max=2290, avg=198.39, stdev=55.25 00:19:51.926 clat percentiles 
(usec): 00:19:51.926 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:19:51.926 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:19:51.926 | 70.00th=[ 190], 80.00th=[ 204], 90.00th=[ 241], 95.00th=[ 269], 00:19:51.926 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 322], 99.95th=[ 363], 00:19:51.926 | 99.99th=[ 2278] 00:19:51.926 bw ( KiB/s): min= 8672, max= 8672, per=34.00%, avg=8672.00, stdev= 0.00, samples=1 00:19:51.926 iops : min= 2168, max= 2168, avg=2168.00, stdev= 0.00, samples=1 00:19:51.926 lat (usec) : 250=79.77%, 500=20.16%, 750=0.02%, 1000=0.02% 00:19:51.926 lat (msec) : 4=0.02% 00:19:51.926 cpu : usr=4.30%, sys=6.30%, ctx=4380, majf=0, minf=1 00:19:51.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.926 issued rwts: total=2048,2332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:51.926 00:19:51.926 Run status group 0 (all jobs): 00:19:51.926 READ: bw=20.0MiB/s (20.9MB/s), 87.2KiB/s-8184KiB/s (89.3kB/s-8380kB/s), io=20.2MiB (21.1MB), run=1001-1009msec 00:19:51.926 WRITE: bw=24.9MiB/s (26.1MB/s), 2030KiB/s-9319KiB/s (2078kB/s-9542kB/s), io=25.1MiB (26.3MB), run=1001-1009msec 00:19:51.926 00:19:51.926 Disk stats (read/write): 00:19:51.926 nvme0n1: ios=45/512, merge=0/0, ticks=1406/99, in_queue=1505, util=99.80% 00:19:51.926 nvme0n2: ios=1586/1926, merge=0/0, ticks=1138/323, in_queue=1461, util=98.48% 00:19:51.926 nvme0n3: ios=926/1024, merge=0/0, ticks=1043/169, in_queue=1212, util=98.44% 00:19:51.926 nvme0n4: ios=1745/2048, merge=0/0, ticks=757/351, in_queue=1108, util=91.11% 00:19:51.926 00:45:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:51.926 [global] 00:19:51.926 thread=1 00:19:51.926 invalidate=1 00:19:51.926 rw=write 00:19:51.926 time_based=1 00:19:51.926 runtime=1 00:19:51.926 ioengine=libaio 00:19:51.926 direct=1 00:19:51.926 bs=4096 00:19:51.926 iodepth=128 00:19:51.926 norandommap=0 00:19:51.926 numjobs=1 00:19:51.926 00:19:51.926 verify_dump=1 00:19:51.926 verify_backlog=512 00:19:51.926 verify_state_save=0 00:19:51.926 do_verify=1 00:19:51.926 verify=crc32c-intel 00:19:51.926 [job0] 00:19:51.926 filename=/dev/nvme0n1 00:19:51.926 [job1] 00:19:51.926 filename=/dev/nvme0n2 00:19:51.926 [job2] 00:19:51.926 filename=/dev/nvme0n3 00:19:51.926 [job3] 00:19:51.926 filename=/dev/nvme0n4 00:19:51.926 Could not set queue depth (nvme0n1) 00:19:51.926 Could not set queue depth (nvme0n2) 00:19:51.926 Could not set queue depth (nvme0n3) 00:19:51.926 Could not set queue depth (nvme0n4) 00:19:51.926 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.926 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.926 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.926 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:51.926 fio-3.35 00:19:51.926 Starting 4 threads 00:19:53.303 00:19:53.303 job0: (groupid=0, jobs=1): err= 0: pid=1400194: Sat Jul 13 00:45:04 2024 00:19:53.303 read: IOPS=3576, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1002msec) 00:19:53.303 slat (nsec): min=1521, max=12209k, avg=158775.47, stdev=864471.30 00:19:53.303 clat (usec): min=6090, max=47320, avg=19824.74, stdev=8658.26 00:19:53.303 lat (usec): min=6098, max=50968, avg=19983.52, stdev=8711.99 00:19:53.303 clat percentiles (usec): 00:19:53.303 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11731], 00:19:53.303 | 30.00th=[13829], 40.00th=[15401], 50.00th=[18220], 60.00th=[20579], 00:19:53.303 | 70.00th=[22938], 80.00th=[26084], 90.00th=[32375], 95.00th=[36439], 00:19:53.303 | 99.00th=[45876], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:19:53.303 | 99.99th=[47449] 00:19:53.303 write: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1002msec); 0 zone resets 00:19:53.303 slat (usec): min=2, max=7526, avg=101.52, stdev=650.92 00:19:53.303 clat (usec): min=242, max=45696, avg=14113.32, stdev=5788.63 00:19:53.303 lat (usec): min=1148, max=46271, avg=14214.84, stdev=5822.65 00:19:53.303 clat percentiles (usec): 00:19:53.303 | 1.00th=[ 5014], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9765], 00:19:53.303 | 30.00th=[11338], 40.00th=[12256], 50.00th=[12911], 60.00th=[13435], 00:19:53.303 | 70.00th=[15795], 80.00th=[16581], 90.00th=[19530], 95.00th=[25035], 00:19:53.303 | 99.00th=[35914], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:19:53.303 | 99.99th=[45876] 00:19:53.303 bw ( KiB/s): min=16384, max=16384, per=22.12%, avg=16384.00, stdev= 0.00, samples=1 00:19:53.303 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:19:53.303 lat (usec) : 250=0.01% 00:19:53.303 lat (msec) : 2=0.03%, 4=0.43%, 10=16.82%, 20=58.36%, 50=24.36% 00:19:53.303 cpu : usr=2.70%, sys=4.50%, ctx=258, majf=0, minf=1 00:19:53.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:53.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.303 issued rwts: total=3584,3942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.303 job1: (groupid=0, jobs=1): err= 0: pid=1400203: Sat Jul 13 00:45:04 2024 00:19:53.303 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:19:53.303 slat (nsec): min=1022, max=9230.7k, avg=77952.62, stdev=586002.49 00:19:53.303 clat (usec): min=1535, max=18924, avg=9844.19, stdev=2329.32 00:19:53.303 lat (usec): min=1539, max=18955, avg=9922.14, stdev=2376.99 00:19:53.303 clat percentiles (usec): 00:19:53.303 | 1.00th=[ 3490], 5.00th=[ 6456], 10.00th=[ 7504], 20.00th=[ 8455], 00:19:53.303 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:19:53.303 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12911], 95.00th=[14484], 00:19:53.303 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:19:53.304 | 99.99th=[19006] 00:19:53.304 write: IOPS=6930, BW=27.1MiB/s (28.4MB/s)(27.3MiB/1008msec); 0 zone resets 00:19:53.304 slat (nsec): min=1913, max=8737.2k, avg=55889.08, stdev=362867.13 00:19:53.304 clat (usec): min=554, max=31512, avg=8864.73, stdev=3941.72 00:19:53.304 lat (usec): min=577, max=32313, avg=8920.62, stdev=3975.96 00:19:53.304 clat percentiles (usec): 00:19:53.304 | 1.00th=[ 2180], 5.00th=[ 3687], 10.00th=[ 5080], 20.00th=[ 6718], 00:19:53.304 | 30.00th=[ 7242], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9372], 00:19:53.304 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[15270], 00:19:53.304 | 99.00th=[28181], 99.50th=[29754], 
99.90th=[31589], 99.95th=[31589], 00:19:53.304 | 99.99th=[31589] 00:19:53.304 bw ( KiB/s): min=24104, max=30768, per=37.04%, avg=27436.00, stdev=4712.16, samples=2 00:19:53.304 iops : min= 6026, max= 7692, avg=6859.00, stdev=1178.04, samples=2 00:19:53.304 lat (usec) : 750=0.05%, 1000=0.17% 00:19:53.304 lat (msec) : 2=0.36%, 4=3.09%, 10=71.97%, 20=22.72%, 50=1.64% 00:19:53.304 cpu : usr=4.77%, sys=7.65%, ctx=651, majf=0, minf=1 00:19:53.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:53.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.304 issued rwts: total=6656,6986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.304 job2: (groupid=0, jobs=1): err= 0: pid=1400227: Sat Jul 13 00:45:04 2024 00:19:53.304 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:19:53.304 slat (nsec): min=1074, max=17130k, avg=118767.80, stdev=757619.36 00:19:53.304 clat (usec): min=8074, max=45862, avg=16188.42, stdev=6176.44 00:19:53.304 lat (usec): min=8082, max=45916, avg=16307.19, stdev=6229.69 00:19:53.304 clat percentiles (usec): 00:19:53.304 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10683], 20.00th=[11600], 00:19:53.304 | 30.00th=[11863], 40.00th=[12518], 50.00th=[14353], 60.00th=[16450], 00:19:53.304 | 70.00th=[18482], 80.00th=[19268], 90.00th=[25297], 95.00th=[28967], 00:19:53.304 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:53.304 | 99.99th=[45876] 00:19:53.304 write: IOPS=4254, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec); 0 zone resets 00:19:53.304 slat (nsec): min=1882, max=14917k, avg=114289.25, stdev=672776.29 00:19:53.304 clat (usec): min=2810, max=33118, avg=14298.37, stdev=5844.45 00:19:53.304 lat (usec): min=4190, max=36901, avg=14412.66, stdev=5903.83 00:19:53.304 clat percentiles (usec): 00:19:53.304 | 1.00th=[ 5473], 5.00th=[ 7767], 10.00th=[ 9765], 20.00th=[11076], 00:19:53.304 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12256], 60.00th=[13042], 00:19:53.304 | 70.00th=[14615], 80.00th=[17171], 90.00th=[23725], 95.00th=[28181], 00:19:53.304 | 99.00th=[32113], 99.50th=[32375], 99.90th=[33162], 99.95th=[33162], 00:19:53.304 | 99.99th=[33162] 00:19:53.304 bw ( KiB/s): min=16384, max=16808, per=22.41%, avg=16596.00, stdev=299.81, samples=2 00:19:53.304 iops : min= 4096, max= 4202, avg=4149.00, stdev=74.95, samples=2 00:19:53.304 lat (msec) : 4=0.01%, 10=7.67%, 20=76.74%, 50=15.58% 00:19:53.304 cpu : usr=3.49%, sys=3.69%, ctx=368, majf=0, minf=1 00:19:53.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:53.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.304 issued rwts: total=4096,4276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.304 job3: (groupid=0, jobs=1): err= 0: pid=1400235: Sat Jul 13 00:45:04 2024 00:19:53.304 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:19:53.304 slat (nsec): min=1081, max=21596k, avg=147889.21, stdev=1026713.57 00:19:53.304 clat (usec): min=5760, max=53358, avg=19289.57, stdev=7820.24 00:19:53.304 lat (usec): min=5766, max=53381, avg=19437.46, stdev=7887.43 00:19:53.304 clat percentiles (usec): 00:19:53.304 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[11207], 
20.00th=[11731], 00:19:53.304 | 30.00th=[12518], 40.00th=[14222], 50.00th=[17433], 60.00th=[20317], 00:19:53.304 | 70.00th=[25035], 80.00th=[27395], 90.00th=[30278], 95.00th=[33424], 00:19:53.304 | 99.00th=[36963], 99.50th=[36963], 99.90th=[39060], 99.95th=[42730], 00:19:53.304 | 99.99th=[53216] 00:19:53.304 write: IOPS=3453, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1002msec); 0 zone resets 00:19:53.304 slat (nsec): min=1873, max=21244k, avg=152373.98, stdev=918899.34 00:19:53.304 clat (usec): min=768, max=52517, avg=18865.12, stdev=9784.87 00:19:53.304 lat (usec): min=3990, max=52547, avg=19017.50, stdev=9861.13 00:19:53.304 clat percentiles (usec): 00:19:53.304 | 1.00th=[ 6915], 5.00th=[ 9110], 10.00th=[11207], 20.00th=[11600], 00:19:53.304 | 30.00th=[11863], 40.00th=[12256], 50.00th=[14222], 60.00th=[19006], 00:19:53.304 | 70.00th=[22676], 80.00th=[24773], 90.00th=[34866], 95.00th=[41157], 00:19:53.304 | 99.00th=[45351], 99.50th=[45876], 99.90th=[49021], 99.95th=[51119], 00:19:53.304 | 99.99th=[52691] 00:19:53.304 bw ( KiB/s): min=12288, max=12288, per=16.59%, avg=12288.00, stdev= 0.00, samples=1 00:19:53.304 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:53.304 lat (usec) : 1000=0.02% 00:19:53.304 lat (msec) : 4=0.03%, 10=4.35%, 20=56.51%, 50=39.05%, 100=0.05% 00:19:53.304 cpu : usr=2.40%, sys=3.40%, ctx=310, majf=0, minf=1 00:19:53.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:53.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:53.304 issued rwts: total=3072,3460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:53.304 00:19:53.304 Run status group 0 (all jobs): 00:19:53.304 READ: bw=67.5MiB/s (70.7MB/s), 12.0MiB/s-25.8MiB/s (12.6MB/s-27.0MB/s), io=68.0MiB (71.3MB), run=1002-1008msec 00:19:53.304 WRITE: bw=72.3MiB/s (75.8MB/s), 13.5MiB/s-27.1MiB/s (14.1MB/s-28.4MB/s), io=72.9MiB (76.4MB), run=1002-1008msec 00:19:53.304 00:19:53.304 Disk stats (read/write): 00:19:53.304 nvme0n1: ios=3124/3280, merge=0/0, ticks=24422/17608, in_queue=42030, util=99.20% 00:19:53.304 nvme0n2: ios=5683/5999, merge=0/0, ticks=52955/48234, in_queue=101189, util=99.80% 00:19:53.304 nvme0n3: ios=3138/3584, merge=0/0, ticks=21146/20365, in_queue=41511, util=88.31% 00:19:53.304 nvme0n4: ios=2584/2735, merge=0/0, ticks=17017/17331, in_queue=34348, util=100.00% 00:19:53.304 00:45:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:53.304 [global] 00:19:53.304 thread=1 00:19:53.304 invalidate=1 00:19:53.304 rw=randwrite 00:19:53.304 time_based=1 00:19:53.304 runtime=1 00:19:53.304 ioengine=libaio 00:19:53.304 direct=1 00:19:53.304 bs=4096 00:19:53.304 iodepth=128 00:19:53.304 norandommap=0 00:19:53.304 numjobs=1 00:19:53.304 00:19:53.304 verify_dump=1 00:19:53.304 verify_backlog=512 00:19:53.304 verify_state_save=0 00:19:53.304 do_verify=1 00:19:53.304 verify=crc32c-intel 00:19:53.304 [job0] 00:19:53.304 filename=/dev/nvme0n1 00:19:53.304 [job1] 00:19:53.304 filename=/dev/nvme0n2 00:19:53.304 [job2] 00:19:53.304 filename=/dev/nvme0n3 00:19:53.304 [job3] 00:19:53.304 filename=/dev/nvme0n4 00:19:53.304 Could not set queue depth (nvme0n1) 00:19:53.304 Could not set queue depth (nvme0n2) 00:19:53.304 Could not set queue depth (nvme0n3) 00:19:53.304 Could not set queue 
depth (nvme0n4) 00:19:53.563 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:53.563 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:53.563 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:53.563 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:53.563 fio-3.35 00:19:53.563 Starting 4 threads 00:19:54.963 00:19:54.963 job0: (groupid=0, jobs=1): err= 0: pid=1400646: Sat Jul 13 00:45:06 2024 00:19:54.963 read: IOPS=4092, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1004msec) 00:19:54.963 slat (nsec): min=1244, max=12630k, avg=122846.38, stdev=805026.25 00:19:54.963 clat (usec): min=3321, max=38102, avg=14807.59, stdev=6487.47 00:19:54.963 lat (usec): min=3327, max=38910, avg=14930.44, stdev=6554.27 00:19:54.963 clat percentiles (usec): 00:19:54.963 | 1.00th=[ 4883], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:19:54.963 | 30.00th=[10159], 40.00th=[10552], 50.00th=[13304], 60.00th=[15139], 00:19:54.963 | 70.00th=[15401], 80.00th=[20055], 90.00th=[25822], 95.00th=[29230], 00:19:54.963 | 99.00th=[32113], 99.50th=[32375], 99.90th=[38011], 99.95th=[38011], 00:19:54.963 | 99.99th=[38011] 00:19:54.963 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:19:54.963 slat (usec): min=2, max=20682, avg=100.70, stdev=681.52 00:19:54.963 clat (usec): min=2228, max=60050, avg=14406.85, stdev=8304.61 00:19:54.963 lat (usec): min=2239, max=60057, avg=14507.55, stdev=8360.94 00:19:54.964 clat percentiles (usec): 00:19:54.964 | 1.00th=[ 3064], 5.00th=[ 5407], 10.00th=[ 7439], 20.00th=[ 9634], 00:19:54.964 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[13435], 60.00th=[14615], 00:19:54.964 | 70.00th=[16057], 80.00th=[16909], 90.00th=[21890], 95.00th=[27132], 00:19:54.964 | 99.00th=[53740], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:19:54.964 | 99.99th=[60031] 00:19:54.964 bw ( KiB/s): min=15472, max=20480, per=25.20%, avg=17976.00, stdev=3541.19, samples=2 00:19:54.964 iops : min= 3868, max= 5120, avg=4494.00, stdev=885.30, samples=2 00:19:54.964 lat (msec) : 4=1.48%, 10=28.01%, 20=53.50%, 50=16.11%, 100=0.89% 00:19:54.964 cpu : usr=4.19%, sys=4.89%, ctx=496, majf=0, minf=1 00:19:54.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:54.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:54.964 issued rwts: total=4109,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:54.964 job1: (groupid=0, jobs=1): err= 0: pid=1400662: Sat Jul 13 00:45:06 2024 00:19:54.964 read: IOPS=4515, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1005msec) 00:19:54.964 slat (nsec): min=1344, max=26975k, avg=122504.70, stdev=938436.67 00:19:54.964 clat (usec): min=2704, max=68104, avg=14561.08, stdev=6524.32 00:19:54.964 lat (usec): min=4011, max=68114, avg=14683.59, stdev=6625.18 00:19:54.964 clat percentiles (usec): 00:19:54.964 | 1.00th=[ 6194], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9372], 00:19:54.964 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[13173], 60.00th=[15795], 00:19:54.964 | 70.00th=[17433], 80.00th=[20317], 90.00th=[22152], 95.00th=[24773], 00:19:54.964 | 99.00th=[28443], 99.50th=[37487], 99.90th=[67634], 99.95th=[67634], 
00:19:54.964 | 99.99th=[67634] 00:19:54.964 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:19:54.964 slat (usec): min=2, max=13030, avg=84.70, stdev=585.85 00:19:54.964 clat (usec): min=2783, max=74642, avg=13291.52, stdev=9690.96 00:19:54.964 lat (usec): min=2796, max=74649, avg=13376.22, stdev=9735.40 00:19:54.964 clat percentiles (usec): 00:19:54.964 | 1.00th=[ 3687], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 8455], 00:19:54.964 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[13960], 00:19:54.964 | 70.00th=[14877], 80.00th=[15926], 90.00th=[17695], 95.00th=[24773], 00:19:54.964 | 99.00th=[67634], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:19:54.964 | 99.99th=[74974] 00:19:54.964 bw ( KiB/s): min=14976, max=21888, per=25.84%, avg=18432.00, stdev=4887.52, samples=2 00:19:54.964 iops : min= 3744, max= 5472, avg=4608.00, stdev=1221.88, samples=2 00:19:54.964 lat (msec) : 4=0.75%, 10=41.79%, 20=43.01%, 50=13.05%, 100=1.39% 00:19:54.964 cpu : usr=3.49%, sys=5.58%, ctx=424, majf=0, minf=1 00:19:54.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:54.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:54.964 issued rwts: total=4538,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:54.964 job2: (groupid=0, jobs=1): err= 0: pid=1400676: Sat Jul 13 00:45:06 2024 00:19:54.964 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:19:54.964 slat (nsec): min=1354, max=8963.3k, avg=130905.15, stdev=764294.58 00:19:54.964 clat (usec): min=7527, max=32655, avg=16998.54, stdev=4654.82 00:19:54.964 lat (usec): min=7531, max=33771, avg=17129.44, stdev=4719.41 00:19:54.964 clat percentiles (usec): 00:19:54.964 | 1.00th=[ 9765], 5.00th=[10814], 10.00th=[11076], 20.00th=[11207], 00:19:54.964 | 30.00th=[14484], 40.00th=[15926], 50.00th=[17171], 60.00th=[18220], 00:19:54.964 | 70.00th=[19530], 80.00th=[20841], 90.00th=[22414], 95.00th=[25822], 00:19:54.964 | 99.00th=[28443], 99.50th=[28967], 99.90th=[28967], 99.95th=[29754], 00:19:54.964 | 99.99th=[32637] 00:19:54.964 write: IOPS=3482, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1006msec); 0 zone resets 00:19:54.964 slat (usec): min=2, max=21241, avg=164.04, stdev=1068.34 00:19:54.964 clat (usec): min=5084, max=56260, avg=21459.25, stdev=8773.72 00:19:54.964 lat (usec): min=8419, max=56292, avg=21623.29, stdev=8858.03 00:19:54.964 clat percentiles (usec): 00:19:54.964 | 1.00th=[11076], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:19:54.964 | 30.00th=[14746], 40.00th=[17695], 50.00th=[18220], 60.00th=[19792], 00:19:54.964 | 70.00th=[25035], 80.00th=[28967], 90.00th=[34866], 95.00th=[39584], 00:19:54.964 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:19:54.964 | 99.99th=[56361] 00:19:54.964 bw ( KiB/s): min=13424, max=13584, per=18.93%, avg=13504.00, stdev=113.14, samples=2 00:19:54.964 iops : min= 3356, max= 3396, avg=3376.00, stdev=28.28, samples=2 00:19:54.964 lat (msec) : 10=0.91%, 20=64.79%, 50=34.28%, 100=0.02% 00:19:54.964 cpu : usr=3.88%, sys=3.88%, ctx=258, majf=0, minf=1 00:19:54.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:54.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:54.964 issued rwts: 
total=3072,3503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:54.964 job3: (groupid=0, jobs=1): err= 0: pid=1400681: Sat Jul 13 00:45:06 2024 00:19:54.964 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:19:54.964 slat (nsec): min=1279, max=10304k, avg=94149.49, stdev=682209.87 00:19:54.964 clat (usec): min=2332, max=26341, avg=11964.52, stdev=3000.19 00:19:54.964 lat (usec): min=2340, max=26344, avg=12058.67, stdev=3048.65 00:19:54.964 clat percentiles (usec): 00:19:54.964 | 1.00th=[ 5080], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[10159], 00:19:54.964 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:19:54.964 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15926], 95.00th=[17957], 00:19:54.964 | 99.00th=[20841], 99.50th=[22938], 99.90th=[25297], 99.95th=[26346], 00:19:54.964 | 99.99th=[26346] 00:19:54.964 write: IOPS=5188, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1006msec); 0 zone resets 00:19:54.964 slat (usec): min=2, max=11118, avg=86.89, stdev=462.94 00:19:54.964 clat (usec): min=487, max=54109, avg=12707.52, stdev=7822.92 00:19:54.964 lat (usec): min=499, max=54116, avg=12794.41, stdev=7872.23 00:19:54.964 clat percentiles (usec): 00:19:54.964 | 1.00th=[ 2245], 5.00th=[ 4359], 10.00th=[ 6652], 20.00th=[ 9241], 00:19:54.964 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:19:54.964 | 70.00th=[11731], 80.00th=[11994], 90.00th=[18220], 95.00th=[35390], 00:19:54.964 | 99.00th=[42730], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:19:54.964 | 99.99th=[54264] 00:19:54.964 bw ( KiB/s): min=20464, max=20496, per=28.71%, avg=20480.00, stdev=22.63, samples=2 00:19:54.964 iops : min= 5116, max= 5124, avg=5120.00, stdev= 5.66, samples=2 00:19:54.964 lat (usec) : 500=0.03%, 750=0.01%, 1000=0.04% 00:19:54.964 lat (msec) : 2=0.26%, 4=1.85%, 10=17.72%, 20=74.55%, 50=5.54% 00:19:54.964 lat (msec) : 100=0.01% 00:19:54.964 cpu : usr=4.58%, sys=5.37%, ctx=626, majf=0, minf=1 00:19:54.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:54.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:54.964 issued rwts: total=5120,5220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:54.964 00:19:54.964 Run status group 0 (all jobs): 00:19:54.964 READ: bw=65.4MiB/s (68.6MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.8MB/s), io=65.8MiB (69.0MB), run=1004-1006msec 00:19:54.964 WRITE: bw=69.7MiB/s (73.0MB/s), 13.6MiB/s-20.3MiB/s (14.3MB/s-21.3MB/s), io=70.1MiB (73.5MB), run=1004-1006msec 00:19:54.964 00:19:54.964 Disk stats (read/write): 00:19:54.964 nvme0n1: ios=3617/4012, merge=0/0, ticks=41313/45491, in_queue=86804, util=97.90% 00:19:54.964 nvme0n2: ios=3602/3591, merge=0/0, ticks=44816/40721, in_queue=85537, util=97.77% 00:19:54.964 nvme0n3: ios=2767/3072, merge=0/0, ticks=22523/29114, in_queue=51637, util=90.11% 00:19:54.964 nvme0n4: ios=4114/4487, merge=0/0, ticks=48248/57478, in_queue=105726, util=98.11% 00:19:54.965 00:45:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:54.965 00:45:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1400788 00:19:54.965 00:45:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:54.965 00:45:06 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@61 -- # sleep 3 00:19:54.965 [global] 00:19:54.965 thread=1 00:19:54.965 invalidate=1 00:19:54.965 rw=read 00:19:54.965 time_based=1 00:19:54.965 runtime=10 00:19:54.965 ioengine=libaio 00:19:54.965 direct=1 00:19:54.965 bs=4096 00:19:54.965 iodepth=1 00:19:54.965 norandommap=1 00:19:54.965 numjobs=1 00:19:54.965 00:19:54.965 [job0] 00:19:54.965 filename=/dev/nvme0n1 00:19:54.965 [job1] 00:19:54.965 filename=/dev/nvme0n2 00:19:54.965 [job2] 00:19:54.965 filename=/dev/nvme0n3 00:19:54.965 [job3] 00:19:54.965 filename=/dev/nvme0n4 00:19:54.965 Could not set queue depth (nvme0n1) 00:19:54.965 Could not set queue depth (nvme0n2) 00:19:54.965 Could not set queue depth (nvme0n3) 00:19:54.965 Could not set queue depth (nvme0n4) 00:19:55.221 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.221 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.221 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.221 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.221 fio-3.35 00:19:55.221 Starting 4 threads 00:19:57.744 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:58.001 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=43474944, buflen=4096 00:19:58.001 fio: pid=1401142, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:58.001 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:58.259 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=49410048, buflen=4096 00:19:58.259 fio: pid=1401136, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:58.259 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:58.259 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:58.259 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:58.259 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:58.517 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=552960, buflen=4096 00:19:58.517 fio: pid=1401110, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:58.517 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:58.517 00:45:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:58.517 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=458752, buflen=4096 00:19:58.517 fio: pid=1401120, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:58.517 00:19:58.517 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1401110: Sat Jul 13 00:45:10 2024 00:19:58.517 read: IOPS=43, BW=174KiB/s (178kB/s)(540KiB/3106msec) 00:19:58.517 
slat (usec): min=7, max=1815, avg=30.15, stdev=154.53 00:19:58.517 clat (usec): min=330, max=48988, avg=22779.60, stdev=20435.25 00:19:58.517 lat (usec): min=338, max=49009, avg=22809.80, stdev=20454.35 00:19:58.517 clat percentiles (usec): 00:19:58.517 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:19:58.517 | 30.00th=[ 379], 40.00th=[ 396], 50.00th=[40633], 60.00th=[41157], 00:19:58.517 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:19:58.517 | 99.00th=[44827], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:19:58.517 | 99.99th=[49021] 00:19:58.517 bw ( KiB/s): min= 93, max= 512, per=0.63%, avg=176.83, stdev=165.68, samples=6 00:19:58.517 iops : min= 23, max= 128, avg=44.17, stdev=41.45, samples=6 00:19:58.517 lat (usec) : 500=44.12%, 750=0.74% 00:19:58.517 lat (msec) : 50=54.41% 00:19:58.517 cpu : usr=0.00%, sys=0.16%, ctx=139, majf=0, minf=1 00:19:58.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:58.517 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1401120: Sat Jul 13 00:45:10 2024 00:19:58.517 read: IOPS=34, BW=136KiB/s (140kB/s)(448KiB/3286msec) 00:19:58.517 slat (usec): min=7, max=7270, avg=139.01, stdev=927.10 00:19:58.517 clat (usec): min=218, max=49112, avg=29186.12, stdev=18747.99 00:19:58.517 lat (usec): min=230, max=49121, avg=29261.45, stdev=18800.67 00:19:58.517 clat percentiles (usec): 00:19:58.517 | 1.00th=[ 225], 5.00th=[ 255], 10.00th=[ 302], 20.00th=[ 351], 00:19:58.517 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:58.517 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:58.517 | 99.00th=[42730], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:19:58.517 | 99.99th=[49021] 00:19:58.517 bw ( KiB/s): min= 96, max= 206, per=0.48%, avg=135.67, stdev=43.75, samples=6 00:19:58.517 iops : min= 24, max= 51, avg=33.83, stdev=10.78, samples=6 00:19:58.517 lat (usec) : 250=3.54%, 500=24.78%, 750=0.88% 00:19:58.517 lat (msec) : 50=69.91% 00:19:58.517 cpu : usr=0.00%, sys=0.27%, ctx=116, majf=0, minf=1 00:19:58.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 issued rwts: total=113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:58.517 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1401136: Sat Jul 13 00:45:10 2024 00:19:58.517 read: IOPS=4200, BW=16.4MiB/s (17.2MB/s)(47.1MiB/2872msec) 00:19:58.517 slat (nsec): min=7426, max=47793, avg=8396.34, stdev=1305.21 00:19:58.517 clat (usec): min=178, max=1061, avg=225.91, stdev=18.45 00:19:58.517 lat (usec): min=186, max=1070, avg=234.31, stdev=18.60 00:19:58.517 clat percentiles (usec): 00:19:58.517 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:19:58.517 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:19:58.517 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 
95.00th=[ 251], 00:19:58.517 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 371], 99.95th=[ 379], 00:19:58.517 | 99.99th=[ 457] 00:19:58.517 bw ( KiB/s): min=16576, max=16984, per=60.44%, avg=16865.60, stdev=166.99, samples=5 00:19:58.517 iops : min= 4144, max= 4246, avg=4216.40, stdev=41.75, samples=5 00:19:58.517 lat (usec) : 250=94.69%, 500=5.29% 00:19:58.517 lat (msec) : 2=0.01% 00:19:58.517 cpu : usr=2.23%, sys=6.97%, ctx=12065, majf=0, minf=1 00:19:58.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 issued rwts: total=12064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:58.517 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1401142: Sat Jul 13 00:45:10 2024 00:19:58.517 read: IOPS=3962, BW=15.5MiB/s (16.2MB/s)(41.5MiB/2679msec) 00:19:58.517 slat (nsec): min=6201, max=58697, avg=7336.51, stdev=1305.99 00:19:58.517 clat (usec): min=183, max=662, avg=241.95, stdev=29.89 00:19:58.517 lat (usec): min=190, max=670, avg=249.29, stdev=30.02 00:19:58.517 clat percentiles (usec): 00:19:58.517 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:19:58.517 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:19:58.517 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:19:58.517 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 510], 99.95th=[ 545], 00:19:58.517 | 99.99th=[ 635] 00:19:58.517 bw ( KiB/s): min=14704, max=17040, per=57.11%, avg=15937.60, stdev=985.14, samples=5 00:19:58.517 iops : min= 3676, max= 4260, avg=3984.40, stdev=246.29, samples=5 00:19:58.517 lat (usec) : 250=68.74%, 500=31.13%, 750=0.12% 00:19:58.517 cpu : usr=1.16%, sys=3.44%, ctx=10615, majf=0, minf=2 00:19:58.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:58.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.517 issued rwts: total=10615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:58.517 00:19:58.517 Run status group 0 (all jobs): 00:19:58.517 READ: bw=27.3MiB/s (28.6MB/s), 136KiB/s-16.4MiB/s (140kB/s-17.2MB/s), io=89.5MiB (93.9MB), run=2679-3286msec 00:19:58.517 00:19:58.517 Disk stats (read/write): 00:19:58.517 nvme0n1: ios=136/0, merge=0/0, ticks=3091/0, in_queue=3091, util=95.35% 00:19:58.517 nvme0n2: ios=132/0, merge=0/0, ticks=3652/0, in_queue=3652, util=99.47% 00:19:58.517 nvme0n3: ios=12105/0, merge=0/0, ticks=3511/0, in_queue=3511, util=99.56% 00:19:58.517 nvme0n4: ios=10377/0, merge=0/0, ticks=2448/0, in_queue=2448, util=96.44% 00:19:58.774 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:58.774 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:59.032 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:59.032 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:19:59.032 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:59.032 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:59.289 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:59.289 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:59.546 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:59.546 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1400788 00:19:59.546 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:59.546 00:45:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:59.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:59.546 nvmf hotplug test: fio failed as expected 00:19:59.546 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.803 rmmod nvme_tcp 00:19:59.803 rmmod nvme_fabrics 00:19:59.803 rmmod nvme_keyring 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.803 00:45:11 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1397958 ']' 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1397958 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1397958 ']' 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1397958 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.803 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1397958 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1397958' 00:20:00.062 killing process with pid 1397958 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1397958 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1397958 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.062 00:45:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.597 00:45:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.597 00:20:02.597 real 0m26.644s 00:20:02.597 user 1m46.529s 00:20:02.597 sys 0m8.541s 00:20:02.597 00:45:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.597 00:45:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.597 ************************************ 00:20:02.597 END TEST nvmf_fio_target 00:20:02.597 ************************************ 00:20:02.597 00:45:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:02.597 00:45:13 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:02.597 00:45:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:02.597 00:45:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.597 00:45:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.597 ************************************ 00:20:02.597 START TEST nvmf_bdevio 00:20:02.597 ************************************ 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:02.597 * Looking for test storage... 00:20:02.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.597 00:45:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.598 00:45:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:07.875 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:07.875 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.875 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:07.875 Found net devices under 0000:86:00.0: cvl_0_0 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:07.876 
Found net devices under 0000:86:00.1: cvl_0_1 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.876 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:20:08.135 00:20:08.135 --- 10.0.0.2 ping statistics --- 00:20:08.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.135 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:20:08.135 00:20:08.135 --- 10.0.0.1 ping statistics --- 00:20:08.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.135 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1405761 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1405761 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1405761 ']' 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.135 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.135 [2024-07-13 00:45:19.581778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:20:08.135 [2024-07-13 00:45:19.581820] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.135 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.135 [2024-07-13 00:45:19.651202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.135 [2024-07-13 00:45:19.692493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.135 [2024-07-13 00:45:19.692529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:08.135 [2024-07-13 00:45:19.692537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.135 [2024-07-13 00:45:19.692543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.135 [2024-07-13 00:45:19.692548] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.135 [2024-07-13 00:45:19.692601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:08.135 [2024-07-13 00:45:19.692687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:08.135 [2024-07-13 00:45:19.692768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.135 [2024-07-13 00:45:19.692769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:08.393 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.393 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:20:08.393 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.393 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.393 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.393 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.394 [2024-07-13 00:45:19.830111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.394 Malloc0 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
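Condensed from the trace above, the target bring-up that target/bdevio.sh performs is equivalent to the following rpc.py sequence (a sketch: it assumes a running nvmf_tgt reachable on the default /var/tmp/spdk.sock socket, whereas the script wraps each call in its netns-aware rpc_cmd helper):

# Sketch of the traced subsystem bring-up (assumes nvmf_tgt is running)
rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte in-capsule data
rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches to that listener through the JSON config fed in on /dev/fd/62 and drives its 23-test suite against the single Nvme1n1 controller, as the trace shows next.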
00:20:08.394 [2024-07-13 00:45:19.881340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:08.394 { 00:20:08.394 "params": { 00:20:08.394 "name": "Nvme$subsystem", 00:20:08.394 "trtype": "$TEST_TRANSPORT", 00:20:08.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.394 "adrfam": "ipv4", 00:20:08.394 "trsvcid": "$NVMF_PORT", 00:20:08.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.394 "hdgst": ${hdgst:-false}, 00:20:08.394 "ddgst": ${ddgst:-false} 00:20:08.394 }, 00:20:08.394 "method": "bdev_nvme_attach_controller" 00:20:08.394 } 00:20:08.394 EOF 00:20:08.394 )") 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:08.394 00:45:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:08.394 "params": { 00:20:08.394 "name": "Nvme1", 00:20:08.394 "trtype": "tcp", 00:20:08.394 "traddr": "10.0.0.2", 00:20:08.394 "adrfam": "ipv4", 00:20:08.394 "trsvcid": "4420", 00:20:08.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.394 "hdgst": false, 00:20:08.394 "ddgst": false 00:20:08.394 }, 00:20:08.394 "method": "bdev_nvme_attach_controller" 00:20:08.394 }' 00:20:08.394 [2024-07-13 00:45:19.930925] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:20:08.394 [2024-07-13 00:45:19.930969] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405784 ] 00:20:08.651 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.651 [2024-07-13 00:45:19.996859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:08.651 [2024-07-13 00:45:20.040173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.651 [2024-07-13 00:45:20.040284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.651 [2024-07-13 00:45:20.040284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.651 I/O targets: 00:20:08.651 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:08.651 00:20:08.651 00:20:08.651 CUnit - A unit testing framework for C - Version 2.1-3 00:20:08.651 http://cunit.sourceforge.net/ 00:20:08.651 00:20:08.651 00:20:08.651 Suite: bdevio tests on: Nvme1n1 00:20:08.909 Test: blockdev write read block ...passed 00:20:08.909 Test: blockdev write zeroes read block ...passed 00:20:08.909 Test: blockdev write zeroes read no split ...passed 00:20:08.909 Test: blockdev write zeroes read split ...passed 00:20:08.909 Test: blockdev write zeroes read split partial ...passed 00:20:08.909 Test: blockdev reset ...[2024-07-13 00:45:20.346705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:08.909 [2024-07-13 00:45:20.346767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445070 (9): Bad file descriptor 00:20:08.909 [2024-07-13 00:45:20.358811] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:08.909 passed 00:20:08.909 Test: blockdev write read 8 blocks ...passed 00:20:08.909 Test: blockdev write read size > 128k ...passed 00:20:08.909 Test: blockdev write read invalid size ...passed 00:20:08.909 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:08.909 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:08.909 Test: blockdev write read max offset ...passed 00:20:09.180 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:09.180 Test: blockdev writev readv 8 blocks ...passed 00:20:09.180 Test: blockdev writev readv 30 x 1block ...passed 00:20:09.180 Test: blockdev writev readv block ...passed 00:20:09.180 Test: blockdev writev readv size > 128k ...passed 00:20:09.180 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:09.180 Test: blockdev comparev and writev ...[2024-07-13 00:45:20.609003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.609045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.609299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.609321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.609567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.609587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.609831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.609851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:09.180 [2024-07-13 00:45:20.609857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:09.180 passed 00:20:09.180 Test: blockdev nvme passthru rw ...passed 00:20:09.180 Test: blockdev nvme passthru vendor specific ...[2024-07-13 00:45:20.691648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.180 [2024-07-13 00:45:20.691666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.691773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.180 [2024-07-13 00:45:20.691782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.691885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.180 [2024-07-13 00:45:20.691894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:09.180 [2024-07-13 00:45:20.691999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:09.180 [2024-07-13 00:45:20.692008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:09.180 passed 00:20:09.180 Test: blockdev nvme admin passthru ...passed 00:20:09.454 Test: blockdev copy ...passed 00:20:09.454 00:20:09.454 Run Summary: Type Total Ran Passed Failed Inactive 00:20:09.454 suites 1 1 n/a 0 0 00:20:09.454 tests 23 23 23 0 0 00:20:09.454 asserts 152 152 152 0 n/a 00:20:09.454 00:20:09.454 Elapsed time = 1.126 seconds 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.454 rmmod nvme_tcp 00:20:09.454 rmmod nvme_fabrics 00:20:09.454 rmmod nvme_keyring 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1405761 ']' 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1405761 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1405761 ']' 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1405761 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.454 00:45:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1405761 00:20:09.454 00:45:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:09.454 00:45:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:09.454 00:45:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1405761' 00:20:09.454 killing process with pid 1405761 00:20:09.454 00:45:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1405761 00:20:09.454 00:45:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1405761 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.713 00:45:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.247 00:45:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.247 00:20:12.247 real 0m9.559s 00:20:12.247 user 0m9.388s 00:20:12.247 sys 0m4.777s 00:20:12.247 00:45:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.247 00:45:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:12.247 ************************************ 00:20:12.247 END TEST nvmf_bdevio 00:20:12.247 ************************************ 00:20:12.247 00:45:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:12.247 00:45:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:12.247 00:45:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:12.247 00:45:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.247 00:45:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:12.247 ************************************ 00:20:12.247 START TEST nvmf_auth_target 00:20:12.247 ************************************ 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:12.247 * Looking for test storage... 
00:20:12.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.247 00:45:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.248 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.522 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.523 00:45:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:17.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:17.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:20:17.523 Found net devices under 0000:86:00.0: cvl_0_0 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:17.523 Found net devices under 0000:86:00.1: cvl_0_1 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.523 00:45:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.523 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:20:17.783 00:20:17.783 --- 10.0.0.2 ping statistics --- 00:20:17.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.783 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:20:17.783 00:20:17.783 --- 10.0.0.1 ping statistics --- 00:20:17.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.783 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1409457 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1409457 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1409457 ']' 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
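Both the bdevio and auth_target runs stand up the same two-port topology before starting the target. Condensed from the nvmf_tcp_init trace above, the plumbing is (a sketch; cvl_0_0/cvl_0_1 are this rig's E810 net devices, and every command needs root):

# Sketch of the traced nvmf_tcp_init topology: target in a namespace,
# initiator in the default namespace, one /24 between them.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                  # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                            # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability

This is why every NVMF_APP invocation in the trace is prefixed with ip netns exec cvl_0_0_ns_spdk: the target process only sees the namespaced port.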
00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.783 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1409554 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3d1c923873c1544f7832331d9ebe2d1e1185f1d862f12622 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Zef 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3d1c923873c1544f7832331d9ebe2d1e1185f1d862f12622 0 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3d1c923873c1544f7832331d9ebe2d1e1185f1d862f12622 0 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3d1c923873c1544f7832331d9ebe2d1e1185f1d862f12622 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Zef 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Zef 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Zef 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c4d4deb00daaa7f6bb480ea303a04dd7cb0aad006c14c09f4ac45670ba9bc189 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.x4K 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c4d4deb00daaa7f6bb480ea303a04dd7cb0aad006c14c09f4ac45670ba9bc189 3 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c4d4deb00daaa7f6bb480ea303a04dd7cb0aad006c14c09f4ac45670ba9bc189 3 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c4d4deb00daaa7f6bb480ea303a04dd7cb0aad006c14c09f4ac45670ba9bc189 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:18.043 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.x4K 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.x4K 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.x4K 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42c1b26e21c9292fb525b3c55ee87766 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BtS 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42c1b26e21c9292fb525b3c55ee87766 1 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42c1b26e21c9292fb525b3c55ee87766 1 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=42c1b26e21c9292fb525b3c55ee87766 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BtS 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BtS 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.BtS 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.303 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c5a94f611054a3b9aa9b366227ac5e123118a4a848e15611 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Yqq 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c5a94f611054a3b9aa9b366227ac5e123118a4a848e15611 2 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c5a94f611054a3b9aa9b366227ac5e123118a4a848e15611 2 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c5a94f611054a3b9aa9b366227ac5e123118a4a848e15611 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Yqq 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Yqq 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Yqq 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2dcbf146c64797263a5e2ca2864bfdfebc87deab0ece72a0 00:20:18.304 
00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.pat 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2dcbf146c64797263a5e2ca2864bfdfebc87deab0ece72a0 2 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2dcbf146c64797263a5e2ca2864bfdfebc87deab0ece72a0 2 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2dcbf146c64797263a5e2ca2864bfdfebc87deab0ece72a0 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.pat 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.pat 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.pat 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=648bc1a16faa68b400f6186c25d41da1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eCU 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 648bc1a16faa68b400f6186c25d41da1 1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 648bc1a16faa68b400f6186c25d41da1 1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=648bc1a16faa68b400f6186c25d41da1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eCU 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eCU 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.eCU 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0e637e4791e0ff91a8ddcce949282de7a8624ec1497848976df3473d41e4cef6 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:18.304 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CaU 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0e637e4791e0ff91a8ddcce949282de7a8624ec1497848976df3473d41e4cef6 3 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0e637e4791e0ff91a8ddcce949282de7a8624ec1497848976df3473d41e4cef6 3 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0e637e4791e0ff91a8ddcce949282de7a8624ec1497848976df3473d41e4cef6 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CaU 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CaU 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.CaU 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1409457 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1409457 ']' 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
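The four gen_dhchap_key calls traced above follow one pattern: draw len/2 random bytes with xxd to get a len-character hex string, look the digest up in the digests map, format the result as a DHHC-1 secret, and store it 0600 in a mktemp file. The "python -" body is elided from the trace, so the encoding step in this sketch is an assumption (base64 of the hex text plus its CRC32), inferred from the DHHC-1:xx:...: secrets that appear later in this log:

    # Minimal sketch of the traced gen_dhchap_key/format_dhchap_key helpers.
    gen_dhchap_key() {
        local digest=$1 len=$2
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # Assumption: the elided python step emits DHHC-1:<id>:base64(key||crc32):
        python3 -c '
    import base64, binascii, struct, sys
    key = sys.argv[1].encode()            # the hex text itself is the secret material
    crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    ' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

Used as in the trace, keys[1]=$(gen_dhchap_key sha256 32) yields a file whose single line is the DHHC-1 string that the nvme connect commands later pass verbatim.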
00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.564 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1409554 /var/tmp/host.sock 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1409554 ']' 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:18.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.564 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Zef 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Zef 00:20:18.823 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Zef 00:20:19.082 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.x4K ]] 00:20:19.082 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4K 00:20:19.082 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.082 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.082 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.083 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4K 00:20:19.083 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4K 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BtS 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BtS 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BtS 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Yqq ]] 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yqq 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yqq 00:20:19.342 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yqq 00:20:19.601 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:19.601 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pat 00:20:19.601 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.601 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.601 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.601 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pat 00:20:19.601 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pat 00:20:19.861 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.eCU ]] 00:20:19.861 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eCU 00:20:19.861 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.861 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.861 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.861 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eCU 00:20:19.861 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
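The registration step above adds each key file to two keyrings, once per RPC socket: rpc_cmd talks to the target at /var/tmp/spdk.sock, while the hostrpc wrapper sends the same keyring_file_add_key to the initiator app at /var/tmp/host.sock. The names key0..key3/ckey0..ckey2 are what the later --dhchap-key/--dhchap-ctrlr-key flags refer to. A sketch of the pattern, assuming the same socket paths as the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # as traced at auth.sh@31
    for i in "${!keys[@]}"; do
        "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"    # target-side keyring
        hostrpc keyring_file_add_key "key$i" "${keys[$i]}"   # initiator-side keyring
        if [[ -n ${ckeys[$i]} ]]; then
            "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done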
/tmp/spdk.key-sha256.eCU 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CaU 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CaU 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CaU 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.120 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.379 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:20.379 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.380 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.639 00:20:20.639 00:45:32 nvmf_tcp.nvmf_auth_target -- 
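The connect_authenticate setup just traced is two RPCs: allow the host NQN on the subsystem with a DH-HMAC-CHAP key pair, then attach a controller from the host stack using the same key names. A sketch reconstructed from those commands ($hostnqn stands for the uuid-based NQN in the log; rpc/hostrpc as in the previous sketch):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    # 1) target side: admit the host with this key pair
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 2) initiator side: attach a controller that must authenticate with it
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
        -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0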
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.639 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.639 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.898 { 00:20:20.898 "cntlid": 1, 00:20:20.898 "qid": 0, 00:20:20.898 "state": "enabled", 00:20:20.898 "thread": "nvmf_tgt_poll_group_000", 00:20:20.898 "listen_address": { 00:20:20.898 "trtype": "TCP", 00:20:20.898 "adrfam": "IPv4", 00:20:20.898 "traddr": "10.0.0.2", 00:20:20.898 "trsvcid": "4420" 00:20:20.898 }, 00:20:20.898 "peer_address": { 00:20:20.898 "trtype": "TCP", 00:20:20.898 "adrfam": "IPv4", 00:20:20.898 "traddr": "10.0.0.1", 00:20:20.898 "trsvcid": "53836" 00:20:20.898 }, 00:20:20.898 "auth": { 00:20:20.898 "state": "completed", 00:20:20.898 "digest": "sha256", 00:20:20.898 "dhgroup": "null" 00:20:20.898 } 00:20:20.898 } 00:20:20.898 ]' 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.898 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.156 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:20:21.722 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.723 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:21.723 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.723 00:45:33 
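The verification that follows each attach uses the same jq probes seen in the trace: the controller must appear on the host side, and the target must report one qpair whose auth record carries the configured digest and DH group with state "completed". A sketch of that check:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0   # tear down before the kernel leg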
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.723 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.723 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.723 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:21.723 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.982 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.241 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.241 { 00:20:22.241 "cntlid": 3, 00:20:22.241 "qid": 0, 00:20:22.241 
"state": "enabled", 00:20:22.241 "thread": "nvmf_tgt_poll_group_000", 00:20:22.241 "listen_address": { 00:20:22.241 "trtype": "TCP", 00:20:22.241 "adrfam": "IPv4", 00:20:22.241 "traddr": "10.0.0.2", 00:20:22.241 "trsvcid": "4420" 00:20:22.241 }, 00:20:22.241 "peer_address": { 00:20:22.241 "trtype": "TCP", 00:20:22.241 "adrfam": "IPv4", 00:20:22.241 "traddr": "10.0.0.1", 00:20:22.241 "trsvcid": "53848" 00:20:22.241 }, 00:20:22.241 "auth": { 00:20:22.241 "state": "completed", 00:20:22.241 "digest": "sha256", 00:20:22.241 "dhgroup": "null" 00:20:22.241 } 00:20:22.241 } 00:20:22.241 ]' 00:20:22.241 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.500 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.500 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.500 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:22.500 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.500 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.500 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.500 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.759 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:23.328 00:45:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.328 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.588 00:20:23.588 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.588 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.588 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.848 { 00:20:23.848 "cntlid": 5, 00:20:23.848 "qid": 0, 00:20:23.848 "state": "enabled", 00:20:23.848 "thread": "nvmf_tgt_poll_group_000", 00:20:23.848 "listen_address": { 00:20:23.848 "trtype": "TCP", 00:20:23.848 "adrfam": "IPv4", 00:20:23.848 "traddr": "10.0.0.2", 00:20:23.848 "trsvcid": "4420" 00:20:23.848 }, 00:20:23.848 "peer_address": { 00:20:23.848 "trtype": "TCP", 00:20:23.848 "adrfam": "IPv4", 00:20:23.848 "traddr": "10.0.0.1", 00:20:23.848 "trsvcid": "53870" 00:20:23.848 }, 00:20:23.848 "auth": { 00:20:23.848 "state": "completed", 00:20:23.848 "digest": "sha256", 00:20:23.848 "dhgroup": "null" 00:20:23.848 } 00:20:23.848 } 00:20:23.848 ]' 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.848 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.107 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.676 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.934 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.193 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.193 { 00:20:25.193 "cntlid": 7, 00:20:25.193 "qid": 0, 00:20:25.193 "state": "enabled", 00:20:25.193 "thread": "nvmf_tgt_poll_group_000", 00:20:25.193 "listen_address": { 00:20:25.193 "trtype": "TCP", 00:20:25.193 "adrfam": "IPv4", 00:20:25.193 "traddr": "10.0.0.2", 00:20:25.193 "trsvcid": "4420" 00:20:25.193 }, 00:20:25.193 "peer_address": { 00:20:25.193 "trtype": "TCP", 00:20:25.193 "adrfam": "IPv4", 00:20:25.193 "traddr": "10.0.0.1", 00:20:25.193 "trsvcid": "53902" 00:20:25.193 }, 00:20:25.193 "auth": { 00:20:25.193 "state": "completed", 00:20:25.193 "digest": "sha256", 00:20:25.193 "dhgroup": "null" 00:20:25.193 } 00:20:25.193 } 00:20:25.193 ]' 00:20:25.193 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.451 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.451 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.451 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.451 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.451 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.451 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.451 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.709 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
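Note that keys[3] was generated without a companion controller key (ckeys[3]= is empty above), so the round just traced is unidirectional: only the host authenticates, and neither add_host nor attach_controller nor nvme connect carries a controller-key argument. The expansion trick from the trace makes that automatic; $subnqn/$hostnqn here are placeholders for the NQNs used throughout this log:

    # Empty ckeys[i] -> empty array -> no --dhchap-ctrlr-key flag at all.
    ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$i" "${ckey[@]}"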
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.275 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.534 00:20:26.534 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.534 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.534 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
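With the null group done for all four keys, the trace moves to ffdhe2048: the auth.sh@91-94 markers show nested loops over digests, DH groups, and key indices, with bdev_nvme_set_options restricting the initiator's allowed algorithms before every connect. A sketch of that sweep:

    for digest in "${digests[@]}"; do        # sha256 here; later sha384/sha512
        for dhgroup in "${dhgroups[@]}"; do  # null, ffdhe2048, ffdhe3072, ...
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The remainder of this section is that sweep replaying: the ffdhe2048 rounds below (cntlid 9 through 15) and the start of ffdhe3072 (cntlid 17) differ from the null rounds only in the negotiated dhgroup reported by nvmf_subsystem_get_qpairs.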
-- # xtrace_disable 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.791 { 00:20:26.791 "cntlid": 9, 00:20:26.791 "qid": 0, 00:20:26.791 "state": "enabled", 00:20:26.791 "thread": "nvmf_tgt_poll_group_000", 00:20:26.791 "listen_address": { 00:20:26.791 "trtype": "TCP", 00:20:26.791 "adrfam": "IPv4", 00:20:26.791 "traddr": "10.0.0.2", 00:20:26.791 "trsvcid": "4420" 00:20:26.791 }, 00:20:26.791 "peer_address": { 00:20:26.791 "trtype": "TCP", 00:20:26.791 "adrfam": "IPv4", 00:20:26.791 "traddr": "10.0.0.1", 00:20:26.791 "trsvcid": "49072" 00:20:26.791 }, 00:20:26.791 "auth": { 00:20:26.791 "state": "completed", 00:20:26.791 "digest": "sha256", 00:20:26.791 "dhgroup": "ffdhe2048" 00:20:26.791 } 00:20:26.791 } 00:20:26.791 ]' 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.791 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.050 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.050 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.050 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.050 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.050 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.616 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.874 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.133 00:20:28.133 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.133 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.133 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.392 { 00:20:28.392 "cntlid": 11, 00:20:28.392 "qid": 0, 00:20:28.392 "state": "enabled", 00:20:28.392 "thread": "nvmf_tgt_poll_group_000", 00:20:28.392 "listen_address": { 00:20:28.392 "trtype": "TCP", 00:20:28.392 "adrfam": "IPv4", 00:20:28.392 "traddr": "10.0.0.2", 00:20:28.392 "trsvcid": "4420" 00:20:28.392 }, 00:20:28.392 "peer_address": { 00:20:28.392 "trtype": "TCP", 00:20:28.392 "adrfam": "IPv4", 00:20:28.392 "traddr": "10.0.0.1", 00:20:28.392 "trsvcid": "49106" 00:20:28.392 }, 00:20:28.392 "auth": { 00:20:28.392 "state": "completed", 00:20:28.392 "digest": "sha256", 00:20:28.392 "dhgroup": "ffdhe2048" 00:20:28.392 } 00:20:28.392 } 00:20:28.392 ]' 00:20:28.392 
00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.392 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.650 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.216 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.501 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.766 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.766 { 00:20:29.766 "cntlid": 13, 00:20:29.766 "qid": 0, 00:20:29.766 "state": "enabled", 00:20:29.766 "thread": "nvmf_tgt_poll_group_000", 00:20:29.766 "listen_address": { 00:20:29.766 "trtype": "TCP", 00:20:29.766 "adrfam": "IPv4", 00:20:29.766 "traddr": "10.0.0.2", 00:20:29.766 "trsvcid": "4420" 00:20:29.766 }, 00:20:29.766 "peer_address": { 00:20:29.766 "trtype": "TCP", 00:20:29.766 "adrfam": "IPv4", 00:20:29.766 "traddr": "10.0.0.1", 00:20:29.766 "trsvcid": "49130" 00:20:29.766 }, 00:20:29.766 "auth": { 00:20:29.766 "state": "completed", 00:20:29.766 "digest": "sha256", 00:20:29.766 "dhgroup": "ffdhe2048" 00:20:29.766 } 00:20:29.766 } 00:20:29.766 ]' 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.766 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.025 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.025 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.025 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.025 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.025 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.283 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.849 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.108 00:20:31.108 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.108 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.108 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.366 { 00:20:31.366 "cntlid": 15, 00:20:31.366 "qid": 0, 00:20:31.366 "state": "enabled", 00:20:31.366 "thread": "nvmf_tgt_poll_group_000", 00:20:31.366 "listen_address": { 00:20:31.366 "trtype": "TCP", 00:20:31.366 "adrfam": "IPv4", 00:20:31.366 "traddr": "10.0.0.2", 00:20:31.366 "trsvcid": "4420" 00:20:31.366 }, 00:20:31.366 "peer_address": { 00:20:31.366 "trtype": "TCP", 00:20:31.366 "adrfam": "IPv4", 00:20:31.366 "traddr": "10.0.0.1", 00:20:31.366 "trsvcid": "49166" 00:20:31.366 }, 00:20:31.366 "auth": { 00:20:31.366 "state": "completed", 00:20:31.366 "digest": "sha256", 00:20:31.366 "dhgroup": "ffdhe2048" 00:20:31.366 } 00:20:31.366 } 00:20:31.366 ]' 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.366 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.624 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.624 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.624 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.624 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:32.191 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.449 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.707 00:20:32.707 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.707 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.707 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.965 { 00:20:32.965 "cntlid": 17, 00:20:32.965 "qid": 0, 00:20:32.965 "state": "enabled", 00:20:32.965 "thread": "nvmf_tgt_poll_group_000", 00:20:32.965 "listen_address": { 00:20:32.965 "trtype": "TCP", 00:20:32.965 "adrfam": "IPv4", 00:20:32.965 "traddr": 
"10.0.0.2", 00:20:32.965 "trsvcid": "4420" 00:20:32.965 }, 00:20:32.965 "peer_address": { 00:20:32.965 "trtype": "TCP", 00:20:32.965 "adrfam": "IPv4", 00:20:32.965 "traddr": "10.0.0.1", 00:20:32.965 "trsvcid": "49198" 00:20:32.965 }, 00:20:32.965 "auth": { 00:20:32.965 "state": "completed", 00:20:32.965 "digest": "sha256", 00:20:32.965 "dhgroup": "ffdhe3072" 00:20:32.965 } 00:20:32.965 } 00:20:32.965 ]' 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.965 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.966 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.223 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.788 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.046 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.304 00:20:34.304 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.304 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.304 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.304 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.304 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.304 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.304 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.562 { 00:20:34.562 "cntlid": 19, 00:20:34.562 "qid": 0, 00:20:34.562 "state": "enabled", 00:20:34.562 "thread": "nvmf_tgt_poll_group_000", 00:20:34.562 "listen_address": { 00:20:34.562 "trtype": "TCP", 00:20:34.562 "adrfam": "IPv4", 00:20:34.562 "traddr": "10.0.0.2", 00:20:34.562 "trsvcid": "4420" 00:20:34.562 }, 00:20:34.562 "peer_address": { 00:20:34.562 "trtype": "TCP", 00:20:34.562 "adrfam": "IPv4", 00:20:34.562 "traddr": "10.0.0.1", 00:20:34.562 "trsvcid": "49228" 00:20:34.562 }, 00:20:34.562 "auth": { 00:20:34.562 "state": "completed", 00:20:34.562 "digest": "sha256", 00:20:34.562 "dhgroup": "ffdhe3072" 00:20:34.562 } 00:20:34.562 } 00:20:34.562 ]' 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.562 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.820 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.386 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.645 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.645 00:20:35.903 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.903 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.903 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.903 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.903 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.904 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.904 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.904 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.904 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.904 { 00:20:35.904 "cntlid": 21, 00:20:35.904 "qid": 0, 00:20:35.904 "state": "enabled", 00:20:35.904 "thread": "nvmf_tgt_poll_group_000", 00:20:35.904 "listen_address": { 00:20:35.904 "trtype": "TCP", 00:20:35.904 "adrfam": "IPv4", 00:20:35.904 "traddr": "10.0.0.2", 00:20:35.904 "trsvcid": "4420" 00:20:35.904 }, 00:20:35.904 "peer_address": { 00:20:35.904 "trtype": "TCP", 00:20:35.904 "adrfam": "IPv4", 00:20:35.904 "traddr": "10.0.0.1", 00:20:35.904 "trsvcid": "38394" 00:20:35.904 }, 00:20:35.904 "auth": { 00:20:35.904 "state": "completed", 00:20:35.904 "digest": "sha256", 00:20:35.904 "dhgroup": "ffdhe3072" 00:20:35.904 } 00:20:35.904 } 00:20:35.904 ]' 00:20:35.904 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.904 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.904 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.162 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.162 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.162 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.162 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.162 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.420 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
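The trace above is one complete pass of the test's inner loop: for each digest/DH-group pair, target/auth.sh pins the host-side driver to that combination with bdev_nvme_set_options, registers the host NQN on the subsystem with a DH-HMAC-CHAP key pair, attaches a controller through the host RPC socket (which forces the authentication handshake), checks the negotiated digest, dhgroup, and auth state on the resulting qpair, then repeats the handshake with the kernel initiator via nvme connect/disconnect before removing the host again. A minimal sketch of one such iteration follows, assuming key0/ckey0 were already loaded as named keys on both sides earlier in the run and with the inline DHHC-1 secrets replaced by placeholders (the paths and NQNs are the ones used throughout this log):

#!/usr/bin/env bash
# One iteration of the DH-HMAC-CHAP auth loop traced above. Assumes an SPDK
# target listening on 10.0.0.2:4420 (default RPC socket) and a second SPDK
# app acting as host with its RPC socket at /var/tmp/host.sock.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
digest=sha256
dhgroup=ffdhe3072

# 1. Restrict the host driver to a single digest/DH-group combination so the
#    handshake can only succeed with the parameters under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Target side: allow this host on the subsystem, bound to a key pair
#    (key0/ckey0 are key names assumed to exist already).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Host side: attach a controller; this performs the authentication.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. Target side: confirm the qpair negotiated what we configured.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# 5. Tear down, then exercise the same handshake with the kernel initiator.
#    <host-key>/<ctrl-key> stand in for the DHHC-1 secrets shown in the log.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:00:<host-key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-key>'
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note that on the key3 passes the script registers --dhchap-key key3 with no controller key: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion visible in the trace omits the flag when ckeys[3] is empty, so those passes exercise unidirectional authentication.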
00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.068 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.326 00:20:37.326 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.326 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.326 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.585 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.585 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.585 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.585 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:37.585 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.585 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.585 { 00:20:37.585 "cntlid": 23, 00:20:37.585 "qid": 0, 00:20:37.585 "state": "enabled", 00:20:37.585 "thread": "nvmf_tgt_poll_group_000", 00:20:37.585 "listen_address": { 00:20:37.585 "trtype": "TCP", 00:20:37.585 "adrfam": "IPv4", 00:20:37.585 "traddr": "10.0.0.2", 00:20:37.585 "trsvcid": "4420" 00:20:37.585 }, 00:20:37.585 "peer_address": { 00:20:37.585 "trtype": "TCP", 00:20:37.585 "adrfam": "IPv4", 00:20:37.585 "traddr": "10.0.0.1", 00:20:37.585 "trsvcid": "38416" 00:20:37.585 }, 00:20:37.585 "auth": { 00:20:37.585 "state": "completed", 00:20:37.585 "digest": "sha256", 00:20:37.585 "dhgroup": "ffdhe3072" 00:20:37.585 } 00:20:37.585 } 00:20:37.585 ]' 00:20:37.585 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.585 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.585 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.585 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.585 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.585 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.585 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.585 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.844 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.411 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.670 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.928 00:20:38.928 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.928 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.928 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.187 { 00:20:39.187 "cntlid": 25, 00:20:39.187 "qid": 0, 00:20:39.187 "state": "enabled", 00:20:39.187 "thread": "nvmf_tgt_poll_group_000", 00:20:39.187 "listen_address": { 00:20:39.187 "trtype": "TCP", 00:20:39.187 "adrfam": "IPv4", 00:20:39.187 "traddr": "10.0.0.2", 00:20:39.187 "trsvcid": "4420" 00:20:39.187 }, 00:20:39.187 "peer_address": { 00:20:39.187 "trtype": "TCP", 00:20:39.187 "adrfam": "IPv4", 00:20:39.187 "traddr": "10.0.0.1", 00:20:39.187 "trsvcid": "38442" 00:20:39.187 }, 00:20:39.187 "auth": { 00:20:39.187 "state": "completed", 00:20:39.187 "digest": "sha256", 00:20:39.187 "dhgroup": "ffdhe4096" 00:20:39.187 } 00:20:39.187 } 00:20:39.187 ]' 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.187 00:45:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.187 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.446 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.014 00:45:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.014 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.273 00:20:40.273 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.273 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.273 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.531 { 00:20:40.531 "cntlid": 27, 00:20:40.531 "qid": 0, 00:20:40.531 "state": "enabled", 00:20:40.531 "thread": "nvmf_tgt_poll_group_000", 00:20:40.531 "listen_address": { 00:20:40.531 "trtype": "TCP", 00:20:40.531 "adrfam": "IPv4", 00:20:40.531 "traddr": "10.0.0.2", 00:20:40.531 "trsvcid": "4420" 00:20:40.531 }, 00:20:40.531 "peer_address": { 00:20:40.531 "trtype": "TCP", 00:20:40.531 "adrfam": "IPv4", 00:20:40.531 "traddr": "10.0.0.1", 00:20:40.531 "trsvcid": "38476" 00:20:40.531 }, 00:20:40.531 "auth": { 00:20:40.531 "state": "completed", 00:20:40.531 "digest": "sha256", 00:20:40.531 "dhgroup": "ffdhe4096" 00:20:40.531 } 00:20:40.531 } 00:20:40.531 ]' 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.531 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.789 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.789 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.789 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.789 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.789 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.789 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:41.356 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:41.614 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:41.614 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.614 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.615 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.872 00:20:41.872 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.872 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.872 00:45:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.131 { 00:20:42.131 "cntlid": 29, 00:20:42.131 "qid": 0, 00:20:42.131 "state": "enabled", 00:20:42.131 "thread": "nvmf_tgt_poll_group_000", 00:20:42.131 "listen_address": { 00:20:42.131 "trtype": "TCP", 00:20:42.131 "adrfam": "IPv4", 00:20:42.131 "traddr": "10.0.0.2", 00:20:42.131 "trsvcid": "4420" 00:20:42.131 }, 00:20:42.131 "peer_address": { 00:20:42.131 "trtype": "TCP", 00:20:42.131 "adrfam": "IPv4", 00:20:42.131 "traddr": "10.0.0.1", 00:20:42.131 "trsvcid": "38500" 00:20:42.131 }, 00:20:42.131 "auth": { 00:20:42.131 "state": "completed", 00:20:42.131 "digest": "sha256", 00:20:42.131 "dhgroup": "ffdhe4096" 00:20:42.131 } 00:20:42.131 } 00:20:42.131 ]' 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.131 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.389 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.389 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.389 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.389 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:20:42.956 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.956 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:42.956 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.956 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.956 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.956 00:45:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.956 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.956 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.215 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.475 00:20:43.475 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.475 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.475 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.733 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.734 { 00:20:43.734 "cntlid": 31, 00:20:43.734 "qid": 0, 00:20:43.734 "state": "enabled", 00:20:43.734 "thread": "nvmf_tgt_poll_group_000", 00:20:43.734 "listen_address": { 00:20:43.734 "trtype": "TCP", 00:20:43.734 "adrfam": "IPv4", 00:20:43.734 "traddr": "10.0.0.2", 00:20:43.734 "trsvcid": "4420" 00:20:43.734 }, 
00:20:43.734 "peer_address": { 00:20:43.734 "trtype": "TCP", 00:20:43.734 "adrfam": "IPv4", 00:20:43.734 "traddr": "10.0.0.1", 00:20:43.734 "trsvcid": "38512" 00:20:43.734 }, 00:20:43.734 "auth": { 00:20:43.734 "state": "completed", 00:20:43.734 "digest": "sha256", 00:20:43.734 "dhgroup": "ffdhe4096" 00:20:43.734 } 00:20:43.734 } 00:20:43.734 ]' 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.734 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.993 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:20:44.560 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.560 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.560 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.560 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.561 00:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.561 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.561 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.561 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.561 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.820 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.821 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.079 00:20:45.079 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.079 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.079 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.337 { 00:20:45.337 "cntlid": 33, 00:20:45.337 "qid": 0, 00:20:45.337 "state": "enabled", 00:20:45.337 "thread": "nvmf_tgt_poll_group_000", 00:20:45.337 "listen_address": { 00:20:45.337 "trtype": "TCP", 00:20:45.337 "adrfam": "IPv4", 00:20:45.337 "traddr": "10.0.0.2", 00:20:45.337 "trsvcid": "4420" 00:20:45.337 }, 00:20:45.337 "peer_address": { 00:20:45.337 "trtype": "TCP", 00:20:45.337 "adrfam": "IPv4", 00:20:45.337 "traddr": "10.0.0.1", 00:20:45.337 "trsvcid": "38540" 00:20:45.337 }, 00:20:45.337 "auth": { 00:20:45.337 "state": "completed", 00:20:45.337 "digest": "sha256", 00:20:45.337 "dhgroup": "ffdhe6144" 00:20:45.337 } 00:20:45.337 } 00:20:45.337 ]' 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.337 00:45:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.337 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.594 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.160 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.419 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.677 00:20:46.677 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.677 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.677 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.936 { 00:20:46.936 "cntlid": 35, 00:20:46.936 "qid": 0, 00:20:46.936 "state": "enabled", 00:20:46.936 "thread": "nvmf_tgt_poll_group_000", 00:20:46.936 "listen_address": { 00:20:46.936 "trtype": "TCP", 00:20:46.936 "adrfam": "IPv4", 00:20:46.936 "traddr": "10.0.0.2", 00:20:46.936 "trsvcid": "4420" 00:20:46.936 }, 00:20:46.936 "peer_address": { 00:20:46.936 "trtype": "TCP", 00:20:46.936 "adrfam": "IPv4", 00:20:46.936 "traddr": "10.0.0.1", 00:20:46.936 "trsvcid": "34264" 00:20:46.936 }, 00:20:46.936 "auth": { 00:20:46.936 "state": "completed", 00:20:46.936 "digest": "sha256", 00:20:46.936 "dhgroup": "ffdhe6144" 00:20:46.936 } 00:20:46.936 } 00:20:46.936 ]' 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.936 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.195 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.762 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.021 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.280 00:20:48.280 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.280 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.280 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.539 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.539 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.539 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.539 00:45:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.539 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.539 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.539 { 00:20:48.539 "cntlid": 37, 00:20:48.539 "qid": 0, 00:20:48.539 "state": "enabled", 00:20:48.539 "thread": "nvmf_tgt_poll_group_000", 00:20:48.539 "listen_address": { 00:20:48.539 "trtype": "TCP", 00:20:48.539 "adrfam": "IPv4", 00:20:48.539 "traddr": "10.0.0.2", 00:20:48.539 "trsvcid": "4420" 00:20:48.539 }, 00:20:48.539 "peer_address": { 00:20:48.539 "trtype": "TCP", 00:20:48.539 "adrfam": "IPv4", 00:20:48.539 "traddr": "10.0.0.1", 00:20:48.539 "trsvcid": "34280" 00:20:48.539 }, 00:20:48.539 "auth": { 00:20:48.539 "state": "completed", 00:20:48.539 "digest": "sha256", 00:20:48.539 "dhgroup": "ffdhe6144" 00:20:48.539 } 00:20:48.539 } 00:20:48.539 ]' 00:20:48.539 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.539 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.539 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.539 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.539 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.539 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.539 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.539 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.797 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.362 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.620 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.877 00:20:49.877 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.877 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.877 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.135 { 00:20:50.135 "cntlid": 39, 00:20:50.135 "qid": 0, 00:20:50.135 "state": "enabled", 00:20:50.135 "thread": "nvmf_tgt_poll_group_000", 00:20:50.135 "listen_address": { 00:20:50.135 "trtype": "TCP", 00:20:50.135 "adrfam": "IPv4", 00:20:50.135 "traddr": "10.0.0.2", 00:20:50.135 "trsvcid": "4420" 00:20:50.135 }, 00:20:50.135 "peer_address": { 00:20:50.135 "trtype": "TCP", 00:20:50.135 "adrfam": "IPv4", 00:20:50.135 "traddr": "10.0.0.1", 00:20:50.135 "trsvcid": "34306" 00:20:50.135 }, 00:20:50.135 "auth": { 00:20:50.135 "state": "completed", 00:20:50.135 "digest": "sha256", 00:20:50.135 "dhgroup": "ffdhe6144" 00:20:50.135 } 00:20:50.135 } 00:20:50.135 ]' 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.135 00:46:01 
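
The repeating block in this trace is one pass of the test's connect/verify cycle. A minimal sketch of that cycle, assuming the socket paths and NQNs shown in the log (the step numbering is editorial; every RPC below appears verbatim in the trace):

    # One connect_authenticate pass, reconstructed from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # 1. Restrict the host-side bdev_nvme layer to the digest/dhgroup under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    # 2. Authorize the host on the target subsystem with the key pair under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller through the host stack, which runs DH-HMAC-CHAP.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
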
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.135 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.959 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.217 00:46:02 
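
Throughout this log, hostrpc (target/auth.sh@31) is simply a wrapper that pins rpc.py to the second, host-side SPDK instance, as its expansion on every @31 line shows:

    # hostrpc as it expands in every target/auth.sh@31 line of this trace:
    # forward all arguments to rpc.py against the host-side app's socket.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
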
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.217 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.787 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.787 { 00:20:51.787 "cntlid": 41, 00:20:51.787 "qid": 0, 00:20:51.787 "state": "enabled", 00:20:51.787 "thread": "nvmf_tgt_poll_group_000", 00:20:51.787 "listen_address": { 00:20:51.787 "trtype": "TCP", 00:20:51.787 "adrfam": "IPv4", 00:20:51.787 "traddr": "10.0.0.2", 00:20:51.787 "trsvcid": "4420" 00:20:51.787 }, 00:20:51.787 "peer_address": { 00:20:51.787 "trtype": "TCP", 00:20:51.787 "adrfam": "IPv4", 00:20:51.787 "traddr": "10.0.0.1", 00:20:51.787 "trsvcid": "34340" 00:20:51.787 }, 00:20:51.787 "auth": { 00:20:51.787 "state": "completed", 00:20:51.787 "digest": "sha256", 00:20:51.787 "dhgroup": "ffdhe8192" 00:20:51.787 } 00:20:51.787 } 00:20:51.787 ]' 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.787 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.045 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.045 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.045 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.045 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.045 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.045 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.302 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:20:52.867 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.867 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:52.867 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.868 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.434 00:20:53.434 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.434 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.434 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.692 { 00:20:53.692 "cntlid": 43, 00:20:53.692 "qid": 0, 00:20:53.692 "state": "enabled", 00:20:53.692 "thread": "nvmf_tgt_poll_group_000", 00:20:53.692 "listen_address": { 00:20:53.692 "trtype": "TCP", 00:20:53.692 "adrfam": "IPv4", 00:20:53.692 "traddr": "10.0.0.2", 00:20:53.692 "trsvcid": "4420" 00:20:53.692 }, 00:20:53.692 "peer_address": { 00:20:53.692 "trtype": "TCP", 00:20:53.692 "adrfam": "IPv4", 00:20:53.692 "traddr": "10.0.0.1", 00:20:53.692 "trsvcid": "34376" 00:20:53.692 }, 00:20:53.692 "auth": { 00:20:53.692 "state": "completed", 00:20:53.692 "digest": "sha256", 00:20:53.692 "dhgroup": "ffdhe8192" 00:20:53.692 } 00:20:53.692 } 00:20:53.692 ]' 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.692 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.950 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.517 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.776 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.034 00:20:55.034 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.034 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.034 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.293 { 00:20:55.293 "cntlid": 45, 00:20:55.293 "qid": 0, 00:20:55.293 "state": "enabled", 00:20:55.293 "thread": "nvmf_tgt_poll_group_000", 00:20:55.293 "listen_address": { 00:20:55.293 "trtype": "TCP", 00:20:55.293 "adrfam": "IPv4", 00:20:55.293 "traddr": "10.0.0.2", 00:20:55.293 "trsvcid": "4420" 
00:20:55.293 }, 00:20:55.293 "peer_address": { 00:20:55.293 "trtype": "TCP", 00:20:55.293 "adrfam": "IPv4", 00:20:55.293 "traddr": "10.0.0.1", 00:20:55.293 "trsvcid": "34398" 00:20:55.293 }, 00:20:55.293 "auth": { 00:20:55.293 "state": "completed", 00:20:55.293 "digest": "sha256", 00:20:55.293 "dhgroup": "ffdhe8192" 00:20:55.293 } 00:20:55.293 } 00:20:55.293 ]' 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.293 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.551 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:55.551 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.551 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.551 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.551 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.551 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.118 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.377 00:46:07 
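
After each attach, the target-side qpair dump (like the one above) is asserted field by field. Condensed, these are the three jq probes at auth.sh@46-48; the expected values shown are those of the current sha256/ffdhe8192 pass:

    # Verify what the target actually negotiated for the new qpair.
    qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe8192
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
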
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.377 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.943 00:20:56.943 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.943 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.943 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.202 { 00:20:57.202 "cntlid": 47, 00:20:57.202 "qid": 0, 00:20:57.202 "state": "enabled", 00:20:57.202 "thread": "nvmf_tgt_poll_group_000", 00:20:57.202 "listen_address": { 00:20:57.202 "trtype": "TCP", 00:20:57.202 "adrfam": "IPv4", 00:20:57.202 "traddr": "10.0.0.2", 00:20:57.202 "trsvcid": "4420" 00:20:57.202 }, 00:20:57.202 "peer_address": { 00:20:57.202 "trtype": "TCP", 00:20:57.202 "adrfam": "IPv4", 00:20:57.202 "traddr": "10.0.0.1", 00:20:57.202 "trsvcid": "54640" 00:20:57.202 }, 00:20:57.202 "auth": { 00:20:57.202 "state": "completed", 00:20:57.202 "digest": "sha256", 00:20:57.202 "dhgroup": "ffdhe8192" 00:20:57.202 } 00:20:57.202 } 00:20:57.202 ]' 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.202 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.202 
00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.461 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:58.028 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.286 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.287 00:20:58.545 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.545 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.545 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.545 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.545 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.545 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.545 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.545 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.545 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.545 { 00:20:58.545 "cntlid": 49, 00:20:58.545 "qid": 0, 00:20:58.545 "state": "enabled", 00:20:58.545 "thread": "nvmf_tgt_poll_group_000", 00:20:58.545 "listen_address": { 00:20:58.545 "trtype": "TCP", 00:20:58.545 "adrfam": "IPv4", 00:20:58.545 "traddr": "10.0.0.2", 00:20:58.545 "trsvcid": "4420" 00:20:58.545 }, 00:20:58.545 "peer_address": { 00:20:58.545 "trtype": "TCP", 00:20:58.545 "adrfam": "IPv4", 00:20:58.545 "traddr": "10.0.0.1", 00:20:58.545 "trsvcid": "54670" 00:20:58.545 }, 00:20:58.545 "auth": { 00:20:58.545 "state": "completed", 00:20:58.545 "digest": "sha384", 00:20:58.545 "dhgroup": "null" 00:20:58.545 } 00:20:58.545 } 00:20:58.545 ]' 00:20:58.545 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.803 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.803 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.803 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:58.803 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.803 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.803 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.803 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.061 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.629 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.629 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:59.629 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.629 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.629 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:59.629 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:59.629 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.630 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.630 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.630 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.630 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.630 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.630 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.888 00:20:59.888 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.888 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.888 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.146 { 00:21:00.146 "cntlid": 51, 00:21:00.146 "qid": 0, 00:21:00.146 "state": "enabled", 00:21:00.146 "thread": "nvmf_tgt_poll_group_000", 00:21:00.146 "listen_address": { 00:21:00.146 "trtype": "TCP", 00:21:00.146 "adrfam": "IPv4", 00:21:00.146 "traddr": "10.0.0.2", 00:21:00.146 "trsvcid": "4420" 00:21:00.146 }, 00:21:00.146 "peer_address": { 00:21:00.146 "trtype": "TCP", 00:21:00.146 "adrfam": "IPv4", 00:21:00.146 "traddr": "10.0.0.1", 00:21:00.146 "trsvcid": "54702" 00:21:00.146 }, 00:21:00.146 "auth": { 00:21:00.146 "state": "completed", 00:21:00.146 "digest": "sha384", 00:21:00.146 "dhgroup": "null" 00:21:00.146 } 00:21:00.146 } 00:21:00.146 ]' 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.146 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.404 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.404 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.404 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.404 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:00.969 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:01.226 00:46:12 
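
Each pass also re-runs the handshake through the kernel initiator. The nvme-cli call always has the shape below; the DHHC-1 secrets are the generated test keys printed in the trace and are elided here:

    # Kernel-initiator side of the same DH-HMAC-CHAP handshake.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
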
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.226 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.483 00:21:01.483 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.483 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.483 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.740 { 00:21:01.740 "cntlid": 53, 00:21:01.740 "qid": 0, 00:21:01.740 "state": "enabled", 00:21:01.740 "thread": "nvmf_tgt_poll_group_000", 00:21:01.740 "listen_address": { 00:21:01.740 "trtype": "TCP", 00:21:01.740 "adrfam": "IPv4", 00:21:01.740 "traddr": "10.0.0.2", 00:21:01.740 "trsvcid": "4420" 00:21:01.740 }, 00:21:01.740 "peer_address": { 00:21:01.740 "trtype": "TCP", 00:21:01.740 "adrfam": "IPv4", 00:21:01.740 "traddr": "10.0.0.1", 00:21:01.740 "trsvcid": "54728" 00:21:01.740 }, 00:21:01.740 "auth": { 00:21:01.740 "state": "completed", 00:21:01.740 "digest": "sha384", 00:21:01.740 "dhgroup": "null" 00:21:01.740 } 00:21:01.740 } 00:21:01.740 ]' 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.740 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.997 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.563 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.821 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.821 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.079 { 00:21:03.079 "cntlid": 55, 00:21:03.079 "qid": 0, 00:21:03.079 "state": "enabled", 00:21:03.079 "thread": "nvmf_tgt_poll_group_000", 00:21:03.079 "listen_address": { 00:21:03.079 "trtype": "TCP", 00:21:03.079 "adrfam": "IPv4", 00:21:03.079 "traddr": "10.0.0.2", 00:21:03.079 "trsvcid": "4420" 00:21:03.079 }, 00:21:03.079 "peer_address": { 00:21:03.079 "trtype": "TCP", 00:21:03.079 "adrfam": "IPv4", 00:21:03.079 "traddr": "10.0.0.1", 00:21:03.079 "trsvcid": "54754" 00:21:03.079 }, 00:21:03.079 "auth": { 00:21:03.079 "state": "completed", 00:21:03.079 "digest": "sha384", 00:21:03.079 "dhgroup": "null" 00:21:03.079 } 00:21:03.079 } 00:21:03.079 ]' 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.079 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.336 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:03.336 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.336 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.336 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.336 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.336 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:03.900 00:46:15 
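
Zooming out, auth.sh@91-96 drive the whole matrix. A rough reconstruction of the loop nest from the trace (the sha512 entry and the middle dhgroup entries are assumptions extrapolated from the pattern; only sha256/sha384 and null/ffdhe2048/ffdhe6144/ffdhe8192 are visible in this excerpt):

    # Full sweep: every digest x dhgroup x key slot.
    digests=(sha256 sha384 sha512)              # sha512 assumed from the pattern
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in 0 1 2 3; do
                # Slot 3 has no controller key: the ${ckeys[$3]:+...} expansion
                # at auth.sh@37 drops --dhchap-ctrlr-key for it, as the key3
                # passes in this trace show.
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
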
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:03.900 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.176 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.500 00:21:04.500 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.500 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.500 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.758 { 00:21:04.758 "cntlid": 57, 00:21:04.758 "qid": 0, 00:21:04.758 "state": "enabled", 00:21:04.758 "thread": "nvmf_tgt_poll_group_000", 00:21:04.758 "listen_address": { 00:21:04.758 "trtype": "TCP", 00:21:04.758 "adrfam": "IPv4", 00:21:04.758 "traddr": "10.0.0.2", 00:21:04.758 "trsvcid": "4420" 00:21:04.758 }, 00:21:04.758 "peer_address": { 00:21:04.758 "trtype": "TCP", 00:21:04.758 "adrfam": "IPv4", 00:21:04.758 "traddr": "10.0.0.1", 00:21:04.758 "trsvcid": "54776" 00:21:04.758 }, 00:21:04.758 "auth": { 00:21:04.758 "state": "completed", 00:21:04.758 "digest": "sha384", 00:21:04.758 "dhgroup": "ffdhe2048" 00:21:04.758 } 00:21:04.758 } 00:21:04.758 ]' 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.758 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.015 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:05.596 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.853 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.853 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.111 { 00:21:06.111 "cntlid": 59, 00:21:06.111 "qid": 0, 00:21:06.111 "state": "enabled", 00:21:06.111 "thread": "nvmf_tgt_poll_group_000", 00:21:06.111 "listen_address": { 00:21:06.111 "trtype": "TCP", 00:21:06.111 "adrfam": "IPv4", 00:21:06.111 "traddr": "10.0.0.2", 00:21:06.111 "trsvcid": "4420" 00:21:06.111 }, 00:21:06.111 "peer_address": { 00:21:06.111 "trtype": "TCP", 00:21:06.111 "adrfam": "IPv4", 00:21:06.111 
"traddr": "10.0.0.1", 00:21:06.111 "trsvcid": "44690" 00:21:06.111 }, 00:21:06.111 "auth": { 00:21:06.111 "state": "completed", 00:21:06.111 "digest": "sha384", 00:21:06.111 "dhgroup": "ffdhe2048" 00:21:06.111 } 00:21:06.111 } 00:21:06.111 ]' 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.111 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.368 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.368 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.368 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.368 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.368 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.368 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:06.933 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.933 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.933 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.933 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.191 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.449 00:21:07.449 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.449 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.449 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.708 { 00:21:07.708 "cntlid": 61, 00:21:07.708 "qid": 0, 00:21:07.708 "state": "enabled", 00:21:07.708 "thread": "nvmf_tgt_poll_group_000", 00:21:07.708 "listen_address": { 00:21:07.708 "trtype": "TCP", 00:21:07.708 "adrfam": "IPv4", 00:21:07.708 "traddr": "10.0.0.2", 00:21:07.708 "trsvcid": "4420" 00:21:07.708 }, 00:21:07.708 "peer_address": { 00:21:07.708 "trtype": "TCP", 00:21:07.708 "adrfam": "IPv4", 00:21:07.708 "traddr": "10.0.0.1", 00:21:07.708 "trsvcid": "44714" 00:21:07.708 }, 00:21:07.708 "auth": { 00:21:07.708 "state": "completed", 00:21:07.708 "digest": "sha384", 00:21:07.708 "dhgroup": "ffdhe2048" 00:21:07.708 } 00:21:07.708 } 00:21:07.708 ]' 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.708 00:46:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.967 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.534 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.793 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.053 00:21:09.053 00:46:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.053 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.053 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.312 { 00:21:09.312 "cntlid": 63, 00:21:09.312 "qid": 0, 00:21:09.312 "state": "enabled", 00:21:09.312 "thread": "nvmf_tgt_poll_group_000", 00:21:09.312 "listen_address": { 00:21:09.312 "trtype": "TCP", 00:21:09.312 "adrfam": "IPv4", 00:21:09.312 "traddr": "10.0.0.2", 00:21:09.312 "trsvcid": "4420" 00:21:09.312 }, 00:21:09.312 "peer_address": { 00:21:09.312 "trtype": "TCP", 00:21:09.312 "adrfam": "IPv4", 00:21:09.312 "traddr": "10.0.0.1", 00:21:09.312 "trsvcid": "44740" 00:21:09.312 }, 00:21:09.312 "auth": { 00:21:09.312 "state": "completed", 00:21:09.312 "digest": "sha384", 00:21:09.312 "dhgroup": "ffdhe2048" 00:21:09.312 } 00:21:09.312 } 00:21:09.312 ]' 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.312 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.571 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
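
(Aside: the iterations above and below all follow the same shape. What follows is a condensed, hedged sketch of one such iteration, reconstructed from the commands visible in this log: the host-side bdev layer is pinned to one digest/dhgroup combination, the host NQN is registered on the target subsystem with a DH-CHAP key (plus an optional controller key for bidirectional authentication), a controller is attached and the negotiated auth state is verified through nvmf_subsystem_get_qpairs, the controller is detached, and the same secrets are exercised again through the kernel nvme-cli path before the host entry is removed. Every command, flag, address, and NQN appears verbatim in the log; the DHHC-1 secret placeholders stand in for the full secrets printed above, and the target-side RPC socket is an assumption — the log's rpc_cmd helper is defined outside this excerpt, so the SPDK default socket is assumed here.)

  #!/usr/bin/env bash
  # Sketch of one target/auth.sh iteration, assuming the SPDK target and the
  # host RPC server at /var/tmp/host.sock from earlier in this log are running.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  # Host side: restrict bdev_nvme to the digest/dhgroup combination under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Target side (default RPC socket assumed; the log uses its rpc_cmd helper):
  # allow this host with key0, and ckey0 when the iteration defines one.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller with matching secrets, then confirm the qpair really
  # completed DH-CHAP with the expected digest and dhgroup before detaching.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Same secrets through the kernel initiator, then remove the host entry.
  # "DHHC-1:..." abbreviates the full secrets printed in the log above.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

(In the surrounding log this body runs for each keyid 0-3 and for each dhgroup in null/ffdhe2048/ffdhe3072/ffdhe4096 under sha384; note that the key3 iterations pass only --dhchap-key key3 with no controller key, so those attaches authenticate unidirectionally.)
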
00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.139 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.398 00:21:10.398 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.398 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.398 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.657 { 
00:21:10.657 "cntlid": 65, 00:21:10.657 "qid": 0, 00:21:10.657 "state": "enabled", 00:21:10.657 "thread": "nvmf_tgt_poll_group_000", 00:21:10.657 "listen_address": { 00:21:10.657 "trtype": "TCP", 00:21:10.657 "adrfam": "IPv4", 00:21:10.657 "traddr": "10.0.0.2", 00:21:10.657 "trsvcid": "4420" 00:21:10.657 }, 00:21:10.657 "peer_address": { 00:21:10.657 "trtype": "TCP", 00:21:10.657 "adrfam": "IPv4", 00:21:10.657 "traddr": "10.0.0.1", 00:21:10.657 "trsvcid": "44764" 00:21:10.657 }, 00:21:10.657 "auth": { 00:21:10.657 "state": "completed", 00:21:10.657 "digest": "sha384", 00:21:10.657 "dhgroup": "ffdhe3072" 00:21:10.657 } 00:21:10.657 } 00:21:10.657 ]' 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.657 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.916 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.916 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.916 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.916 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:11.483 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.484 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:11.484 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.484 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.484 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.484 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.484 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:11.484 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.741 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.998 00:21:11.998 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.998 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.998 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.257 { 00:21:12.257 "cntlid": 67, 00:21:12.257 "qid": 0, 00:21:12.257 "state": "enabled", 00:21:12.257 "thread": "nvmf_tgt_poll_group_000", 00:21:12.257 "listen_address": { 00:21:12.257 "trtype": "TCP", 00:21:12.257 "adrfam": "IPv4", 00:21:12.257 "traddr": "10.0.0.2", 00:21:12.257 "trsvcid": "4420" 00:21:12.257 }, 00:21:12.257 "peer_address": { 00:21:12.257 "trtype": "TCP", 00:21:12.257 "adrfam": "IPv4", 00:21:12.257 "traddr": "10.0.0.1", 00:21:12.257 "trsvcid": "44790" 00:21:12.257 }, 00:21:12.257 "auth": { 00:21:12.257 "state": "completed", 00:21:12.257 "digest": "sha384", 00:21:12.257 "dhgroup": "ffdhe3072" 00:21:12.257 } 00:21:12.257 } 00:21:12.257 ]' 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.257 00:46:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.257 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.515 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:13.082 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.341 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.599 00:21:13.600 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.600 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.600 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.600 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.600 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.600 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.600 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.600 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.600 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.600 { 00:21:13.600 "cntlid": 69, 00:21:13.600 "qid": 0, 00:21:13.600 "state": "enabled", 00:21:13.600 "thread": "nvmf_tgt_poll_group_000", 00:21:13.600 "listen_address": { 00:21:13.600 "trtype": "TCP", 00:21:13.600 "adrfam": "IPv4", 00:21:13.600 "traddr": "10.0.0.2", 00:21:13.600 "trsvcid": "4420" 00:21:13.600 }, 00:21:13.600 "peer_address": { 00:21:13.600 "trtype": "TCP", 00:21:13.600 "adrfam": "IPv4", 00:21:13.600 "traddr": "10.0.0.1", 00:21:13.600 "trsvcid": "44826" 00:21:13.600 }, 00:21:13.600 "auth": { 00:21:13.600 "state": "completed", 00:21:13.600 "digest": "sha384", 00:21:13.600 "dhgroup": "ffdhe3072" 00:21:13.600 } 00:21:13.600 } 00:21:13.600 ]' 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.858 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.117 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret 
DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:14.684 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.684 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.684 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.684 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.685 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.685 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.943 00:21:14.943 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.943 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.943 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.202 { 00:21:15.202 "cntlid": 71, 00:21:15.202 "qid": 0, 00:21:15.202 "state": "enabled", 00:21:15.202 "thread": "nvmf_tgt_poll_group_000", 00:21:15.202 "listen_address": { 00:21:15.202 "trtype": "TCP", 00:21:15.202 "adrfam": "IPv4", 00:21:15.202 "traddr": "10.0.0.2", 00:21:15.202 "trsvcid": "4420" 00:21:15.202 }, 00:21:15.202 "peer_address": { 00:21:15.202 "trtype": "TCP", 00:21:15.202 "adrfam": "IPv4", 00:21:15.202 "traddr": "10.0.0.1", 00:21:15.202 "trsvcid": "44852" 00:21:15.202 }, 00:21:15.202 "auth": { 00:21:15.202 "state": "completed", 00:21:15.202 "digest": "sha384", 00:21:15.202 "dhgroup": "ffdhe3072" 00:21:15.202 } 00:21:15.202 } 00:21:15.202 ]' 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.202 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.461 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.461 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.461 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.461 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.028 00:46:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.287 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.545 00:21:16.545 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.545 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.545 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.804 { 00:21:16.804 "cntlid": 73, 00:21:16.804 "qid": 0, 00:21:16.804 "state": "enabled", 00:21:16.804 "thread": "nvmf_tgt_poll_group_000", 00:21:16.804 "listen_address": { 00:21:16.804 "trtype": "TCP", 00:21:16.804 "adrfam": "IPv4", 00:21:16.804 "traddr": "10.0.0.2", 00:21:16.804 "trsvcid": "4420" 00:21:16.804 }, 00:21:16.804 "peer_address": { 00:21:16.804 "trtype": "TCP", 00:21:16.804 "adrfam": "IPv4", 00:21:16.804 "traddr": "10.0.0.1", 00:21:16.804 "trsvcid": "45822" 00:21:16.804 }, 00:21:16.804 "auth": { 00:21:16.804 
"state": "completed", 00:21:16.804 "digest": "sha384", 00:21:16.804 "dhgroup": "ffdhe4096" 00:21:16.804 } 00:21:16.804 } 00:21:16.804 ]' 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.804 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.805 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.063 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:17.629 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.629 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.629 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.629 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.629 00:46:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.629 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.629 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.629 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.888 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.888 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.888 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.888 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.147 { 00:21:18.147 "cntlid": 75, 00:21:18.147 "qid": 0, 00:21:18.147 "state": "enabled", 00:21:18.147 "thread": "nvmf_tgt_poll_group_000", 00:21:18.147 "listen_address": { 00:21:18.147 "trtype": "TCP", 00:21:18.147 "adrfam": "IPv4", 00:21:18.147 "traddr": "10.0.0.2", 00:21:18.147 "trsvcid": "4420" 00:21:18.147 }, 00:21:18.147 "peer_address": { 00:21:18.147 "trtype": "TCP", 00:21:18.147 "adrfam": "IPv4", 00:21:18.147 "traddr": "10.0.0.1", 00:21:18.147 "trsvcid": "45844" 00:21:18.147 }, 00:21:18.147 "auth": { 00:21:18.147 "state": "completed", 00:21:18.147 "digest": "sha384", 00:21:18.147 "dhgroup": "ffdhe4096" 00:21:18.147 } 00:21:18.147 } 00:21:18.147 ]' 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.147 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.406 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.406 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.406 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.406 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.406 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.406 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:18.972 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.230 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.230 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.231 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:19.489 00:21:19.489 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.489 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.489 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.782 { 00:21:19.782 "cntlid": 77, 00:21:19.782 "qid": 0, 00:21:19.782 "state": "enabled", 00:21:19.782 "thread": "nvmf_tgt_poll_group_000", 00:21:19.782 "listen_address": { 00:21:19.782 "trtype": "TCP", 00:21:19.782 "adrfam": "IPv4", 00:21:19.782 "traddr": "10.0.0.2", 00:21:19.782 "trsvcid": "4420" 00:21:19.782 }, 00:21:19.782 "peer_address": { 00:21:19.782 "trtype": "TCP", 00:21:19.782 "adrfam": "IPv4", 00:21:19.782 "traddr": "10.0.0.1", 00:21:19.782 "trsvcid": "45870" 00:21:19.782 }, 00:21:19.782 "auth": { 00:21:19.782 "state": "completed", 00:21:19.782 "digest": "sha384", 00:21:19.782 "dhgroup": "ffdhe4096" 00:21:19.782 } 00:21:19.782 } 00:21:19.782 ]' 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.782 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.041 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:20.609 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.869 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.128 00:21:21.128 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.128 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.128 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.386 { 00:21:21.386 "cntlid": 79, 00:21:21.386 "qid": 
0, 00:21:21.386 "state": "enabled", 00:21:21.386 "thread": "nvmf_tgt_poll_group_000", 00:21:21.386 "listen_address": { 00:21:21.386 "trtype": "TCP", 00:21:21.386 "adrfam": "IPv4", 00:21:21.386 "traddr": "10.0.0.2", 00:21:21.386 "trsvcid": "4420" 00:21:21.386 }, 00:21:21.386 "peer_address": { 00:21:21.386 "trtype": "TCP", 00:21:21.386 "adrfam": "IPv4", 00:21:21.386 "traddr": "10.0.0.1", 00:21:21.386 "trsvcid": "45892" 00:21:21.386 }, 00:21:21.386 "auth": { 00:21:21.386 "state": "completed", 00:21:21.386 "digest": "sha384", 00:21:21.386 "dhgroup": "ffdhe4096" 00:21:21.386 } 00:21:21.386 } 00:21:21.386 ]' 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.386 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.644 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.208 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.466 00:46:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.466 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.724 00:21:22.724 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.724 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.724 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.983 { 00:21:22.983 "cntlid": 81, 00:21:22.983 "qid": 0, 00:21:22.983 "state": "enabled", 00:21:22.983 "thread": "nvmf_tgt_poll_group_000", 00:21:22.983 "listen_address": { 00:21:22.983 "trtype": "TCP", 00:21:22.983 "adrfam": "IPv4", 00:21:22.983 "traddr": "10.0.0.2", 00:21:22.983 "trsvcid": "4420" 00:21:22.983 }, 00:21:22.983 "peer_address": { 00:21:22.983 "trtype": "TCP", 00:21:22.983 "adrfam": "IPv4", 00:21:22.983 "traddr": "10.0.0.1", 00:21:22.983 "trsvcid": "45928" 00:21:22.983 }, 00:21:22.983 "auth": { 00:21:22.983 "state": "completed", 00:21:22.983 "digest": "sha384", 00:21:22.983 "dhgroup": "ffdhe6144" 00:21:22.983 } 00:21:22.983 } 00:21:22.983 ]' 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.983 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.242 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:23.809 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.068 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.068 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.068 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.068 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.068 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.068 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.327 00:21:24.327 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.327 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.327 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.585 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.585 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.585 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.585 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.585 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.585 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.585 { 00:21:24.585 "cntlid": 83, 00:21:24.585 "qid": 0, 00:21:24.585 "state": "enabled", 00:21:24.585 "thread": "nvmf_tgt_poll_group_000", 00:21:24.585 "listen_address": { 00:21:24.585 "trtype": "TCP", 00:21:24.585 "adrfam": "IPv4", 00:21:24.585 "traddr": "10.0.0.2", 00:21:24.585 "trsvcid": "4420" 00:21:24.585 }, 00:21:24.586 "peer_address": { 00:21:24.586 "trtype": "TCP", 00:21:24.586 "adrfam": "IPv4", 00:21:24.586 "traddr": "10.0.0.1", 00:21:24.586 "trsvcid": "45966" 00:21:24.586 }, 00:21:24.586 "auth": { 00:21:24.586 "state": "completed", 00:21:24.586 "digest": "sha384", 00:21:24.586 "dhgroup": "ffdhe6144" 00:21:24.586 } 00:21:24.586 } 00:21:24.586 ]' 00:21:24.586 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.586 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.586 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.586 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.586 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.586 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.586 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.586 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.844 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret 
DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.410 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.667 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.668 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.925 00:21:25.925 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.925 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.925 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.183 { 00:21:26.183 "cntlid": 85, 00:21:26.183 "qid": 0, 00:21:26.183 "state": "enabled", 00:21:26.183 "thread": "nvmf_tgt_poll_group_000", 00:21:26.183 "listen_address": { 00:21:26.183 "trtype": "TCP", 00:21:26.183 "adrfam": "IPv4", 00:21:26.183 "traddr": "10.0.0.2", 00:21:26.183 "trsvcid": "4420" 00:21:26.183 }, 00:21:26.183 "peer_address": { 00:21:26.183 "trtype": "TCP", 00:21:26.183 "adrfam": "IPv4", 00:21:26.183 "traddr": "10.0.0.1", 00:21:26.183 "trsvcid": "37424" 00:21:26.183 }, 00:21:26.183 "auth": { 00:21:26.183 "state": "completed", 00:21:26.183 "digest": "sha384", 00:21:26.183 "dhgroup": "ffdhe6144" 00:21:26.183 } 00:21:26.183 } 00:21:26.183 ]' 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.183 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.441 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
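Note: every iteration in this section repeats one fixed RPC cycle, driven from target/auth.sh. The target-side rpc_cmd registers the host NQN on the subsystem with the iteration's DH-HMAC-CHAP key pair, while hostrpc (rpc.py -s /var/tmp/host.sock) pins the initiator to a single digest/dhgroup and attaches a controller, which forces authentication on the new qpair. A minimal sketch of that cycle, using only commands that appear verbatim in the log above and assuming the target app listens on rpc.py's default socket (key2/ckey2 stand for whichever key index the loop is on):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  # Host side: only offer the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side: allow this host on the subsystem with the iteration's key pair.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller; the qpair must pass DH-HMAC-CHAP to reach "enabled".
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
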
00:21:27.006 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.263 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.522 00:21:27.522 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.522 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.522 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.779 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.779 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.779 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.779 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.779 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.779 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.779 { 00:21:27.779 "cntlid": 87, 00:21:27.779 "qid": 0, 00:21:27.779 "state": "enabled", 00:21:27.779 "thread": "nvmf_tgt_poll_group_000", 00:21:27.779 "listen_address": { 00:21:27.780 "trtype": "TCP", 00:21:27.780 "adrfam": "IPv4", 00:21:27.780 "traddr": "10.0.0.2", 00:21:27.780 "trsvcid": "4420" 00:21:27.780 }, 00:21:27.780 "peer_address": { 00:21:27.780 "trtype": "TCP", 00:21:27.780 "adrfam": "IPv4", 00:21:27.780 "traddr": "10.0.0.1", 00:21:27.780 "trsvcid": "37452" 00:21:27.780 }, 00:21:27.780 "auth": { 00:21:27.780 "state": "completed", 
00:21:27.780 "digest": "sha384", 00:21:27.780 "dhgroup": "ffdhe6144" 00:21:27.780 } 00:21:27.780 } 00:21:27.780 ]' 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.780 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.037 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:28.603 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.603 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.603 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.603 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.603 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.603 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.603 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.603 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.604 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.861 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.120 00:21:29.120 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.120 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.378 { 00:21:29.378 "cntlid": 89, 00:21:29.378 "qid": 0, 00:21:29.378 "state": "enabled", 00:21:29.378 "thread": "nvmf_tgt_poll_group_000", 00:21:29.378 "listen_address": { 00:21:29.378 "trtype": "TCP", 00:21:29.378 "adrfam": "IPv4", 00:21:29.378 "traddr": "10.0.0.2", 00:21:29.378 "trsvcid": "4420" 00:21:29.378 }, 00:21:29.378 "peer_address": { 00:21:29.378 "trtype": "TCP", 00:21:29.378 "adrfam": "IPv4", 00:21:29.378 "traddr": "10.0.0.1", 00:21:29.378 "trsvcid": "37466" 00:21:29.378 }, 00:21:29.378 "auth": { 00:21:29.378 "state": "completed", 00:21:29.378 "digest": "sha384", 00:21:29.378 "dhgroup": "ffdhe8192" 00:21:29.378 } 00:21:29.378 } 00:21:29.378 ]' 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.378 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.379 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.637 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.637 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.637 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.637 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.637 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.637 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.203 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.462 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
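Note: after each attach, the checks at target/auth.sh@44-49 read the state back and assert what was negotiated before detaching. A sketch of those assertions, reusing the jq filters that appear verbatim in the log ($rpc and the socket are as in the sketch above; the expected values here match the sha384/ffdhe8192 iteration in progress):

  # The host-side controller must exist under the expected name.
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The target's qpair listing must show a completed DH-HMAC-CHAP exchange.
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
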
00:21:31.056 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.056 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.056 { 00:21:31.056 "cntlid": 91, 00:21:31.056 "qid": 0, 00:21:31.056 "state": "enabled", 00:21:31.056 "thread": "nvmf_tgt_poll_group_000", 00:21:31.056 "listen_address": { 00:21:31.056 "trtype": "TCP", 00:21:31.056 "adrfam": "IPv4", 00:21:31.056 "traddr": "10.0.0.2", 00:21:31.056 "trsvcid": "4420" 00:21:31.056 }, 00:21:31.056 "peer_address": { 00:21:31.056 "trtype": "TCP", 00:21:31.056 "adrfam": "IPv4", 00:21:31.056 "traddr": "10.0.0.1", 00:21:31.056 "trsvcid": "37494" 00:21:31.056 }, 00:21:31.056 "auth": { 00:21:31.056 "state": "completed", 00:21:31.056 "digest": "sha384", 00:21:31.056 "dhgroup": "ffdhe8192" 00:21:31.056 } 00:21:31.056 } 00:21:31.056 ]' 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.315 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.573 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.139 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.706 00:21:32.706 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.706 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.706 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.964 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.964 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.964 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.964 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.964 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.964 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.964 { 
00:21:32.964 "cntlid": 93, 00:21:32.964 "qid": 0, 00:21:32.964 "state": "enabled", 00:21:32.965 "thread": "nvmf_tgt_poll_group_000", 00:21:32.965 "listen_address": { 00:21:32.965 "trtype": "TCP", 00:21:32.965 "adrfam": "IPv4", 00:21:32.965 "traddr": "10.0.0.2", 00:21:32.965 "trsvcid": "4420" 00:21:32.965 }, 00:21:32.965 "peer_address": { 00:21:32.965 "trtype": "TCP", 00:21:32.965 "adrfam": "IPv4", 00:21:32.965 "traddr": "10.0.0.1", 00:21:32.965 "trsvcid": "37514" 00:21:32.965 }, 00:21:32.965 "auth": { 00:21:32.965 "state": "completed", 00:21:32.965 "digest": "sha384", 00:21:32.965 "dhgroup": "ffdhe8192" 00:21:32.965 } 00:21:32.965 } 00:21:32.965 ]' 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.965 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.223 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.790 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:34.049 00:46:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.049 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.617 00:21:34.617 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.617 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.617 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.617 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.617 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.617 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.617 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.617 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.617 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.617 { 00:21:34.617 "cntlid": 95, 00:21:34.617 "qid": 0, 00:21:34.617 "state": "enabled", 00:21:34.617 "thread": "nvmf_tgt_poll_group_000", 00:21:34.617 "listen_address": { 00:21:34.617 "trtype": "TCP", 00:21:34.617 "adrfam": "IPv4", 00:21:34.617 "traddr": "10.0.0.2", 00:21:34.617 "trsvcid": "4420" 00:21:34.617 }, 00:21:34.617 "peer_address": { 00:21:34.617 "trtype": "TCP", 00:21:34.617 "adrfam": "IPv4", 00:21:34.617 "traddr": "10.0.0.1", 00:21:34.617 "trsvcid": "37542" 00:21:34.617 }, 00:21:34.617 "auth": { 00:21:34.617 "state": "completed", 00:21:34.617 "digest": "sha384", 00:21:34.617 "dhgroup": "ffdhe8192" 00:21:34.617 } 00:21:34.617 } 00:21:34.617 ]' 00:21:34.617 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.876 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.876 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.876 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.876 00:46:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.876 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.876 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.876 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.135 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.702 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.961 00:21:35.961 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.961 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.961 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.221 { 00:21:36.221 "cntlid": 97, 00:21:36.221 "qid": 0, 00:21:36.221 "state": "enabled", 00:21:36.221 "thread": "nvmf_tgt_poll_group_000", 00:21:36.221 "listen_address": { 00:21:36.221 "trtype": "TCP", 00:21:36.221 "adrfam": "IPv4", 00:21:36.221 "traddr": "10.0.0.2", 00:21:36.221 "trsvcid": "4420" 00:21:36.221 }, 00:21:36.221 "peer_address": { 00:21:36.221 "trtype": "TCP", 00:21:36.221 "adrfam": "IPv4", 00:21:36.221 "traddr": "10.0.0.1", 00:21:36.221 "trsvcid": "32872" 00:21:36.221 }, 00:21:36.221 "auth": { 00:21:36.221 "state": "completed", 00:21:36.221 "digest": "sha512", 00:21:36.221 "dhgroup": "null" 00:21:36.221 } 00:21:36.221 } 00:21:36.221 ]' 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:36.221 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.480 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.480 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.480 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.480 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret 
DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.048 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.307 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.566 00:21:37.566 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.566 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.566 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.825 00:46:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.825 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.825 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.825 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.825 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.825 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.825 { 00:21:37.825 "cntlid": 99, 00:21:37.825 "qid": 0, 00:21:37.825 "state": "enabled", 00:21:37.825 "thread": "nvmf_tgt_poll_group_000", 00:21:37.825 "listen_address": { 00:21:37.825 "trtype": "TCP", 00:21:37.825 "adrfam": "IPv4", 00:21:37.825 "traddr": "10.0.0.2", 00:21:37.825 "trsvcid": "4420" 00:21:37.825 }, 00:21:37.825 "peer_address": { 00:21:37.825 "trtype": "TCP", 00:21:37.825 "adrfam": "IPv4", 00:21:37.825 "traddr": "10.0.0.1", 00:21:37.825 "trsvcid": "32902" 00:21:37.825 }, 00:21:37.825 "auth": { 00:21:37.825 "state": "completed", 00:21:37.825 "digest": "sha512", 00:21:37.825 "dhgroup": "null" 00:21:37.825 } 00:21:37.825 } 00:21:37.826 ]' 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.826 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.084 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:38.652 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.653 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.653 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.653 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.653 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.653 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.653 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.653 00:46:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.913 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.172 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.172 { 00:21:39.172 "cntlid": 101, 00:21:39.172 "qid": 0, 00:21:39.172 "state": "enabled", 00:21:39.172 "thread": "nvmf_tgt_poll_group_000", 00:21:39.172 "listen_address": { 00:21:39.172 "trtype": "TCP", 00:21:39.172 "adrfam": "IPv4", 00:21:39.172 "traddr": "10.0.0.2", 00:21:39.172 "trsvcid": "4420" 00:21:39.172 }, 00:21:39.172 "peer_address": { 00:21:39.172 "trtype": "TCP", 00:21:39.172 "adrfam": "IPv4", 00:21:39.172 "traddr": "10.0.0.1", 00:21:39.172 "trsvcid": "32914" 00:21:39.172 }, 00:21:39.172 "auth": 
{ 00:21:39.172 "state": "completed", 00:21:39.172 "digest": "sha512", 00:21:39.172 "dhgroup": "null" 00:21:39.172 } 00:21:39.172 } 00:21:39.172 ]' 00:21:39.172 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.432 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.432 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.432 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:39.432 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.432 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.432 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.432 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.691 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.278 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.594 00:21:40.594 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.594 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.594 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.853 { 00:21:40.853 "cntlid": 103, 00:21:40.853 "qid": 0, 00:21:40.853 "state": "enabled", 00:21:40.853 "thread": "nvmf_tgt_poll_group_000", 00:21:40.853 "listen_address": { 00:21:40.853 "trtype": "TCP", 00:21:40.853 "adrfam": "IPv4", 00:21:40.853 "traddr": "10.0.0.2", 00:21:40.853 "trsvcid": "4420" 00:21:40.853 }, 00:21:40.853 "peer_address": { 00:21:40.853 "trtype": "TCP", 00:21:40.853 "adrfam": "IPv4", 00:21:40.853 "traddr": "10.0.0.1", 00:21:40.853 "trsvcid": "32938" 00:21:40.853 }, 00:21:40.853 "auth": { 00:21:40.853 "state": "completed", 00:21:40.853 "digest": "sha512", 00:21:40.853 "dhgroup": "null" 00:21:40.853 } 00:21:40.853 } 00:21:40.853 ]' 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.853 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.112 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.679 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:41.936 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:41.936 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.936 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.937 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.195 00:21:42.195 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.195 00:46:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.195 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.195 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.195 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.195 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.195 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.454 { 00:21:42.454 "cntlid": 105, 00:21:42.454 "qid": 0, 00:21:42.454 "state": "enabled", 00:21:42.454 "thread": "nvmf_tgt_poll_group_000", 00:21:42.454 "listen_address": { 00:21:42.454 "trtype": "TCP", 00:21:42.454 "adrfam": "IPv4", 00:21:42.454 "traddr": "10.0.0.2", 00:21:42.454 "trsvcid": "4420" 00:21:42.454 }, 00:21:42.454 "peer_address": { 00:21:42.454 "trtype": "TCP", 00:21:42.454 "adrfam": "IPv4", 00:21:42.454 "traddr": "10.0.0.1", 00:21:42.454 "trsvcid": "32974" 00:21:42.454 }, 00:21:42.454 "auth": { 00:21:42.454 "state": "completed", 00:21:42.454 "digest": "sha512", 00:21:42.454 "dhgroup": "ffdhe2048" 00:21:42.454 } 00:21:42.454 } 00:21:42.454 ]' 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.454 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.713 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
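Every connect_authenticate cycle in this log follows the same RPC shape; only the digest, dhgroup and key index rotate. A minimal sketch of the iteration that just completed (sha512 / ffdhe2048 / key0), where rpc.py abbreviates the full scripts/rpc.py path shown above and rpc_cmd is the suite's target-side wrapper:

    # host side: restrict the initiator to a single digest/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # target side: register the host with its DH-HMAC-CHAP key and controller key
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller through the authenticated listener
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the qpair negotiated auth, then tear down for the next combination
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0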
00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.279 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.537 00:21:43.537 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.537 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.537 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.796 { 00:21:43.796 "cntlid": 107, 00:21:43.796 "qid": 0, 00:21:43.796 "state": "enabled", 00:21:43.796 "thread": 
"nvmf_tgt_poll_group_000", 00:21:43.796 "listen_address": { 00:21:43.796 "trtype": "TCP", 00:21:43.796 "adrfam": "IPv4", 00:21:43.796 "traddr": "10.0.0.2", 00:21:43.796 "trsvcid": "4420" 00:21:43.796 }, 00:21:43.796 "peer_address": { 00:21:43.796 "trtype": "TCP", 00:21:43.796 "adrfam": "IPv4", 00:21:43.796 "traddr": "10.0.0.1", 00:21:43.796 "trsvcid": "33004" 00:21:43.796 }, 00:21:43.796 "auth": { 00:21:43.796 "state": "completed", 00:21:43.796 "digest": "sha512", 00:21:43.796 "dhgroup": "ffdhe2048" 00:21:43.796 } 00:21:43.796 } 00:21:43.796 ]' 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.796 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.054 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.054 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.054 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.054 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:44.617 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.618 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:44.618 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.618 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.618 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.618 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.618 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.618 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:44.875 00:46:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.875 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.134 00:21:45.134 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.134 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.134 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.392 { 00:21:45.392 "cntlid": 109, 00:21:45.392 "qid": 0, 00:21:45.392 "state": "enabled", 00:21:45.392 "thread": "nvmf_tgt_poll_group_000", 00:21:45.392 "listen_address": { 00:21:45.392 "trtype": "TCP", 00:21:45.392 "adrfam": "IPv4", 00:21:45.392 "traddr": "10.0.0.2", 00:21:45.392 "trsvcid": "4420" 00:21:45.392 }, 00:21:45.392 "peer_address": { 00:21:45.392 "trtype": "TCP", 00:21:45.392 "adrfam": "IPv4", 00:21:45.392 "traddr": "10.0.0.1", 00:21:45.392 "trsvcid": "33024" 00:21:45.392 }, 00:21:45.392 "auth": { 00:21:45.392 "state": "completed", 00:21:45.392 "digest": "sha512", 00:21:45.392 "dhgroup": "ffdhe2048" 00:21:45.392 } 00:21:45.392 } 00:21:45.392 ]' 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.392 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.650 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.217 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.476 00:46:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.734 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.734 { 00:21:46.734 "cntlid": 111, 00:21:46.734 "qid": 0, 00:21:46.734 "state": "enabled", 00:21:46.734 "thread": "nvmf_tgt_poll_group_000", 00:21:46.734 "listen_address": { 00:21:46.734 "trtype": "TCP", 00:21:46.734 "adrfam": "IPv4", 00:21:46.734 "traddr": "10.0.0.2", 00:21:46.734 "trsvcid": "4420" 00:21:46.734 }, 00:21:46.734 "peer_address": { 00:21:46.734 "trtype": "TCP", 00:21:46.734 "adrfam": "IPv4", 00:21:46.734 "traddr": "10.0.0.1", 00:21:46.734 "trsvcid": "59458" 00:21:46.734 }, 00:21:46.734 "auth": { 00:21:46.734 "state": "completed", 00:21:46.734 "digest": "sha512", 00:21:46.734 "dhgroup": "ffdhe2048" 00:21:46.734 } 00:21:46.734 } 00:21:46.734 ]' 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.734 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.993 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.993 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.993 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.993 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.993 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.251 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.819 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.077 00:21:48.077 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.077 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.077 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.335 { 00:21:48.335 "cntlid": 113, 00:21:48.335 "qid": 0, 00:21:48.335 "state": "enabled", 00:21:48.335 "thread": "nvmf_tgt_poll_group_000", 00:21:48.335 "listen_address": { 00:21:48.335 "trtype": "TCP", 00:21:48.335 "adrfam": "IPv4", 00:21:48.335 "traddr": "10.0.0.2", 00:21:48.335 "trsvcid": "4420" 00:21:48.335 }, 00:21:48.335 "peer_address": { 00:21:48.335 "trtype": "TCP", 00:21:48.335 "adrfam": "IPv4", 00:21:48.335 "traddr": "10.0.0.1", 00:21:48.335 "trsvcid": "59480" 00:21:48.335 }, 00:21:48.335 "auth": { 00:21:48.335 "state": "completed", 00:21:48.335 "digest": "sha512", 00:21:48.335 "dhgroup": "ffdhe3072" 00:21:48.335 } 00:21:48.335 } 00:21:48.335 ]' 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:48.335 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.594 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.594 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.594 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.594 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.162 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.421 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.680 00:21:49.680 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.680 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.680 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.939 { 00:21:49.939 "cntlid": 115, 00:21:49.939 "qid": 0, 00:21:49.939 "state": "enabled", 00:21:49.939 "thread": "nvmf_tgt_poll_group_000", 00:21:49.939 "listen_address": { 00:21:49.939 "trtype": "TCP", 00:21:49.939 "adrfam": "IPv4", 00:21:49.939 "traddr": "10.0.0.2", 00:21:49.939 "trsvcid": "4420" 00:21:49.939 }, 00:21:49.939 "peer_address": { 00:21:49.939 "trtype": "TCP", 00:21:49.939 "adrfam": "IPv4", 00:21:49.939 "traddr": "10.0.0.1", 00:21:49.939 "trsvcid": "59508" 00:21:49.939 }, 00:21:49.939 "auth": { 00:21:49.939 "state": "completed", 00:21:49.939 "digest": "sha512", 00:21:49.939 "dhgroup": "ffdhe3072" 00:21:49.939 } 00:21:49.939 } 
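The qpairs listing echoed around this point (cntlid 115, sha512 / ffdhe3072) is what the three jq probes just below interrogate. Condensed, the per-cycle assertion amounts to the following sketch (the script itself compares against backslash-escaped patterns, which is equivalent):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]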
00:21:49.939 ]' 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.939 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.198 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.765 00:47:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.765 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.049 00:21:51.049 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.049 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.049 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.308 { 00:21:51.308 "cntlid": 117, 00:21:51.308 "qid": 0, 00:21:51.308 "state": "enabled", 00:21:51.308 "thread": "nvmf_tgt_poll_group_000", 00:21:51.308 "listen_address": { 00:21:51.308 "trtype": "TCP", 00:21:51.308 "adrfam": "IPv4", 00:21:51.308 "traddr": "10.0.0.2", 00:21:51.308 "trsvcid": "4420" 00:21:51.308 }, 00:21:51.308 "peer_address": { 00:21:51.308 "trtype": "TCP", 00:21:51.308 "adrfam": "IPv4", 00:21:51.308 "traddr": "10.0.0.1", 00:21:51.308 "trsvcid": "59542" 00:21:51.308 }, 00:21:51.308 "auth": { 00:21:51.308 "state": "completed", 00:21:51.308 "digest": "sha512", 00:21:51.308 "dhgroup": "ffdhe3072" 00:21:51.308 } 00:21:51.308 } 00:21:51.308 ]' 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.308 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.567 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.134 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.393 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.651 00:21:52.651 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.651 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.651 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.910 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.910 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.911 { 00:21:52.911 "cntlid": 119, 00:21:52.911 "qid": 0, 00:21:52.911 "state": "enabled", 00:21:52.911 "thread": "nvmf_tgt_poll_group_000", 00:21:52.911 "listen_address": { 00:21:52.911 "trtype": "TCP", 00:21:52.911 "adrfam": "IPv4", 00:21:52.911 "traddr": "10.0.0.2", 00:21:52.911 "trsvcid": "4420" 00:21:52.911 }, 00:21:52.911 "peer_address": { 00:21:52.911 "trtype": "TCP", 00:21:52.911 "adrfam": "IPv4", 00:21:52.911 "traddr": "10.0.0.1", 00:21:52.911 "trsvcid": "59580" 00:21:52.911 }, 00:21:52.911 "auth": { 00:21:52.911 "state": "completed", 00:21:52.911 "digest": "sha512", 00:21:52.911 "dhgroup": "ffdhe3072" 00:21:52.911 } 00:21:52.911 } 00:21:52.911 ]' 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.911 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.169 00:47:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.737 00:47:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.737 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.998 00:21:53.998 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.998 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.998 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.257 { 00:21:54.257 "cntlid": 121, 00:21:54.257 "qid": 0, 00:21:54.257 "state": "enabled", 00:21:54.257 "thread": "nvmf_tgt_poll_group_000", 00:21:54.257 "listen_address": { 00:21:54.257 "trtype": "TCP", 00:21:54.257 "adrfam": "IPv4", 
00:21:54.257 "traddr": "10.0.0.2", 00:21:54.257 "trsvcid": "4420" 00:21:54.257 }, 00:21:54.257 "peer_address": { 00:21:54.257 "trtype": "TCP", 00:21:54.257 "adrfam": "IPv4", 00:21:54.257 "traddr": "10.0.0.1", 00:21:54.257 "trsvcid": "59610" 00:21:54.257 }, 00:21:54.257 "auth": { 00:21:54.257 "state": "completed", 00:21:54.257 "digest": "sha512", 00:21:54.257 "dhgroup": "ffdhe4096" 00:21:54.257 } 00:21:54.257 } 00:21:54.257 ]' 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.257 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.517 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.517 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.517 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.517 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.517 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.517 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:21:55.085 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.085 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.085 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.085 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:55.344 00:47:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.344 00:47:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.603 00:21:55.603 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.603 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.603 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.862 { 00:21:55.862 "cntlid": 123, 00:21:55.862 "qid": 0, 00:21:55.862 "state": "enabled", 00:21:55.862 "thread": "nvmf_tgt_poll_group_000", 00:21:55.862 "listen_address": { 00:21:55.862 "trtype": "TCP", 00:21:55.862 "adrfam": "IPv4", 00:21:55.862 "traddr": "10.0.0.2", 00:21:55.862 "trsvcid": "4420" 00:21:55.862 }, 00:21:55.862 "peer_address": { 00:21:55.862 "trtype": "TCP", 00:21:55.862 "adrfam": "IPv4", 00:21:55.862 "traddr": "10.0.0.1", 00:21:55.862 "trsvcid": "47100" 00:21:55.862 }, 00:21:55.862 "auth": { 00:21:55.862 "state": "completed", 00:21:55.862 "digest": "sha512", 00:21:55.862 "dhgroup": "ffdhe4096" 00:21:55.862 } 00:21:55.862 } 00:21:55.862 ]' 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.862 00:47:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.862 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.121 00:47:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.690 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.949 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.207 00:21:57.207 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.207 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.207 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.465 { 00:21:57.465 "cntlid": 125, 00:21:57.465 "qid": 0, 00:21:57.465 "state": "enabled", 00:21:57.465 "thread": "nvmf_tgt_poll_group_000", 00:21:57.465 "listen_address": { 00:21:57.465 "trtype": "TCP", 00:21:57.465 "adrfam": "IPv4", 00:21:57.465 "traddr": "10.0.0.2", 00:21:57.465 "trsvcid": "4420" 00:21:57.465 }, 00:21:57.465 "peer_address": { 00:21:57.465 "trtype": "TCP", 00:21:57.465 "adrfam": "IPv4", 00:21:57.465 "traddr": "10.0.0.1", 00:21:57.465 "trsvcid": "47120" 00:21:57.465 }, 00:21:57.465 "auth": { 00:21:57.465 "state": "completed", 00:21:57.465 "digest": "sha512", 00:21:57.465 "dhgroup": "ffdhe4096" 00:21:57.465 } 00:21:57.465 } 00:21:57.465 ]' 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.465 00:47:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.723 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
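Each connect_authenticate pass traced above follows the same shape: pin the host-side DH-HMAC-CHAP digest and FFDHE group, authorize the host NQN on the subsystem with a key pair, attach a controller so the handshake runs, assert the negotiated parameters on the resulting qpair, then repeat the handshake through the kernel initiator before removing the host. Below is a minimal bash sketch of one such iteration, condensed from the RPC calls visible in this log; the $rpc/$hostrpc helper variables, the keys/ckeys arrays, and the hostid derivation are illustrative assumptions rather than the literal target/auth.sh code.

    # Sketch of one connect_authenticate iteration, assuming rpc.py helpers
    # and DHHC-1 secrets in keys[]/ckeys[] loaded earlier in the run.
    verify_dhchap() {
        local digest=$1 dhgroup=$2 id=$3
        local subnqn=nqn.2024-03.io.spdk:cnode0
        local hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
        local rpc="scripts/rpc.py"                          # target-side RPC (assumed path)
        local hostrpc="scripts/rpc.py -s /var/tmp/host.sock" # host-side RPC socket

        # Pin the host initiator to the digest/dhgroup under test
        $hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Authorize the host on the target, binding it to this key pair
        # (the real script omits --dhchap-ctrlr-key when no ckey exists for $id,
        # as seen in the key3 passes above)
        $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$id" --dhchap-ctrlr-key "ckey$id"

        # Attaching a controller forces the DH-HMAC-CHAP handshake to run
        $hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$id" --dhchap-ctrlr-key "ckey$id"

        # The qpair's auth block reports what was actually negotiated
        local qpairs
        qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

        # Repeat the handshake through the kernel initiator, then clean up
        $hostrpc bdev_nvme_detach_controller nvme0
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
            --hostid "${hostnqn##*uuid:}" \
            --dhchap-secret "${keys[$id]}" --dhchap-ctrl-secret "${ckeys[$id]}"
        nvme disconnect -n "$subnqn"
        $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }

Running such an iteration for every keyid against each digest/dhgroup combination reproduces the sweep seen in this run; the "state" == "completed" assertion on the nvmf_subsystem_get_qpairs output is what distinguishes an authenticated qpair from one that merely connected.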
00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.290 00:47:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.548 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.864 { 00:21:58.864 "cntlid": 127, 00:21:58.864 "qid": 0, 00:21:58.864 "state": "enabled", 00:21:58.864 "thread": "nvmf_tgt_poll_group_000", 00:21:58.864 "listen_address": { 00:21:58.864 "trtype": "TCP", 00:21:58.864 "adrfam": "IPv4", 00:21:58.864 "traddr": "10.0.0.2", 00:21:58.864 "trsvcid": "4420" 00:21:58.864 }, 00:21:58.864 "peer_address": { 00:21:58.864 "trtype": "TCP", 00:21:58.864 "adrfam": "IPv4", 00:21:58.864 "traddr": "10.0.0.1", 00:21:58.864 "trsvcid": "47150" 00:21:58.864 }, 00:21:58.864 "auth": { 00:21:58.864 "state": "completed", 00:21:58.864 "digest": "sha512", 00:21:58.864 "dhgroup": "ffdhe4096" 00:21:58.864 } 00:21:58.864 } 00:21:58.864 ]' 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.864 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.158 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.158 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.158 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.158 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.158 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.158 00:47:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.725 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.982 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.240 00:22:00.240 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.240 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.240 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.498 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.498 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.498 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.498 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.498 00:47:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.498 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.498 { 00:22:00.498 "cntlid": 129, 00:22:00.498 "qid": 0, 00:22:00.498 "state": "enabled", 00:22:00.498 "thread": "nvmf_tgt_poll_group_000", 00:22:00.498 "listen_address": { 00:22:00.498 "trtype": "TCP", 00:22:00.498 "adrfam": "IPv4", 00:22:00.498 "traddr": "10.0.0.2", 00:22:00.498 "trsvcid": "4420" 00:22:00.498 }, 00:22:00.498 "peer_address": { 00:22:00.498 "trtype": "TCP", 00:22:00.498 "adrfam": "IPv4", 00:22:00.498 "traddr": "10.0.0.1", 00:22:00.498 "trsvcid": "47176" 00:22:00.498 }, 00:22:00.498 "auth": { 00:22:00.498 "state": "completed", 00:22:00.498 "digest": "sha512", 00:22:00.498 "dhgroup": "ffdhe6144" 00:22:00.498 } 00:22:00.498 } 00:22:00.498 ]' 00:22:00.498 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.499 00:47:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.499 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.499 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.499 00:47:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.499 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.499 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.499 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.778 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.345 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.603 00:47:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.603 00:47:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.861 00:22:01.861 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.861 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.861 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.120 { 00:22:02.120 "cntlid": 131, 00:22:02.120 "qid": 0, 00:22:02.120 "state": "enabled", 00:22:02.120 "thread": "nvmf_tgt_poll_group_000", 00:22:02.120 "listen_address": { 00:22:02.120 "trtype": "TCP", 00:22:02.120 "adrfam": "IPv4", 00:22:02.120 "traddr": "10.0.0.2", 00:22:02.120 "trsvcid": "4420" 00:22:02.120 }, 00:22:02.120 "peer_address": { 00:22:02.120 "trtype": "TCP", 00:22:02.120 "adrfam": "IPv4", 00:22:02.120 "traddr": "10.0.0.1", 00:22:02.120 "trsvcid": "47206" 00:22:02.120 }, 00:22:02.120 "auth": { 00:22:02.120 "state": "completed", 00:22:02.120 "digest": "sha512", 00:22:02.120 "dhgroup": "ffdhe6144" 00:22:02.120 } 00:22:02.120 } 00:22:02.120 ]' 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.120 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.379 00:47:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.948 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.208 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.467 00:22:03.467 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.467 00:47:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.467 00:47:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.725 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.725 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.725 00:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.725 00:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.725 00:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.725 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.725 { 00:22:03.725 "cntlid": 133, 00:22:03.725 "qid": 0, 00:22:03.725 "state": "enabled", 00:22:03.725 "thread": "nvmf_tgt_poll_group_000", 00:22:03.726 "listen_address": { 00:22:03.726 "trtype": "TCP", 00:22:03.726 "adrfam": "IPv4", 00:22:03.726 "traddr": "10.0.0.2", 00:22:03.726 "trsvcid": "4420" 00:22:03.726 }, 00:22:03.726 "peer_address": { 00:22:03.726 "trtype": "TCP", 00:22:03.726 "adrfam": "IPv4", 00:22:03.726 "traddr": "10.0.0.1", 00:22:03.726 "trsvcid": "47236" 00:22:03.726 }, 00:22:03.726 "auth": { 00:22:03.726 "state": "completed", 00:22:03.726 "digest": "sha512", 00:22:03.726 "dhgroup": "ffdhe6144" 00:22:03.726 } 00:22:03.726 } 00:22:03.726 ]' 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.726 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.984 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:22:04.552 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.552 00:47:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.552 00:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.552 00:47:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.552 00:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.552 00:47:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.552 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.552 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.811 00:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.812 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:04.812 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.070 00:22:05.070 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.070 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.070 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.330 { 00:22:05.330 "cntlid": 135, 00:22:05.330 "qid": 0, 00:22:05.330 "state": "enabled", 00:22:05.330 "thread": "nvmf_tgt_poll_group_000", 00:22:05.330 "listen_address": { 00:22:05.330 "trtype": "TCP", 00:22:05.330 "adrfam": "IPv4", 00:22:05.330 "traddr": "10.0.0.2", 00:22:05.330 "trsvcid": "4420" 00:22:05.330 }, 
00:22:05.330 "peer_address": { 00:22:05.330 "trtype": "TCP", 00:22:05.330 "adrfam": "IPv4", 00:22:05.330 "traddr": "10.0.0.1", 00:22:05.330 "trsvcid": "47268" 00:22:05.330 }, 00:22:05.330 "auth": { 00:22:05.330 "state": "completed", 00:22:05.330 "digest": "sha512", 00:22:05.330 "dhgroup": "ffdhe6144" 00:22:05.330 } 00:22:05.330 } 00:22:05.330 ]' 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.330 00:47:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.588 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.156 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.415 00:47:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.983 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.983 { 00:22:06.983 "cntlid": 137, 00:22:06.983 "qid": 0, 00:22:06.983 "state": "enabled", 00:22:06.983 "thread": "nvmf_tgt_poll_group_000", 00:22:06.983 "listen_address": { 00:22:06.983 "trtype": "TCP", 00:22:06.983 "adrfam": "IPv4", 00:22:06.983 "traddr": "10.0.0.2", 00:22:06.983 "trsvcid": "4420" 00:22:06.983 }, 00:22:06.983 "peer_address": { 00:22:06.983 "trtype": "TCP", 00:22:06.983 "adrfam": "IPv4", 00:22:06.983 "traddr": "10.0.0.1", 00:22:06.983 "trsvcid": "38136" 00:22:06.983 }, 00:22:06.983 "auth": { 00:22:06.983 "state": "completed", 00:22:06.983 "digest": "sha512", 00:22:06.983 "dhgroup": "ffdhe8192" 00:22:06.983 } 00:22:06.983 } 00:22:06.983 ]' 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.983 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.242 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.242 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.242 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.242 00:47:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.242 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.242 00:47:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.809 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.068 00:47:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.636 00:22:08.636 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.636 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.636 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.894 { 00:22:08.894 "cntlid": 139, 00:22:08.894 "qid": 0, 00:22:08.894 "state": "enabled", 00:22:08.894 "thread": "nvmf_tgt_poll_group_000", 00:22:08.894 "listen_address": { 00:22:08.894 "trtype": "TCP", 00:22:08.894 "adrfam": "IPv4", 00:22:08.894 "traddr": "10.0.0.2", 00:22:08.894 "trsvcid": "4420" 00:22:08.894 }, 00:22:08.894 "peer_address": { 00:22:08.894 "trtype": "TCP", 00:22:08.894 "adrfam": "IPv4", 00:22:08.894 "traddr": "10.0.0.1", 00:22:08.894 "trsvcid": "38160" 00:22:08.894 }, 00:22:08.894 "auth": { 00:22:08.894 "state": "completed", 00:22:08.894 "digest": "sha512", 00:22:08.894 "dhgroup": "ffdhe8192" 00:22:08.894 } 00:22:08.894 } 00:22:08.894 ]' 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.894 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.152 00:47:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDJjMWIyNmUyMWM5MjkyZmI1MjViM2M1NWVlODc3Njao5dBU: --dhchap-ctrl-secret DHHC-1:02:YzVhOTRmNjExMDU0YTNiOWFhOWIzNjYyMjdhYzVlMTIzMTE4YTRhODQ4ZTE1NjExKUFqVw==: 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.719 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.977 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.978 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.237 00:22:10.237 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.237 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.237 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.495 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.495 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.495 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.495 00:47:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:10.495 00:47:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.495 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.495 { 00:22:10.495 "cntlid": 141, 00:22:10.495 "qid": 0, 00:22:10.495 "state": "enabled", 00:22:10.495 "thread": "nvmf_tgt_poll_group_000", 00:22:10.495 "listen_address": { 00:22:10.495 "trtype": "TCP", 00:22:10.495 "adrfam": "IPv4", 00:22:10.495 "traddr": "10.0.0.2", 00:22:10.495 "trsvcid": "4420" 00:22:10.495 }, 00:22:10.495 "peer_address": { 00:22:10.495 "trtype": "TCP", 00:22:10.495 "adrfam": "IPv4", 00:22:10.495 "traddr": "10.0.0.1", 00:22:10.495 "trsvcid": "38170" 00:22:10.495 }, 00:22:10.495 "auth": { 00:22:10.495 "state": "completed", 00:22:10.495 "digest": "sha512", 00:22:10.495 "dhgroup": "ffdhe8192" 00:22:10.496 } 00:22:10.496 } 00:22:10.496 ]' 00:22:10.496 00:47:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.496 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.496 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.755 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.755 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.755 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.755 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.755 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.755 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmRjYmYxNDZjNjQ3OTcyNjNhNWUyY2EyODY0YmZkZmViYzg3ZGVhYjBlY2U3MmEwRZXQew==: --dhchap-ctrl-secret DHHC-1:01:NjQ4YmMxYTE2ZmFhNjhiNDAwZjYxODZjMjVkNDFkYTGqsZjl: 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.323 00:47:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:11.581 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:12.149 00:22:12.149 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.149 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.149 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.406 { 00:22:12.406 "cntlid": 143, 00:22:12.406 "qid": 0, 00:22:12.406 "state": "enabled", 00:22:12.406 "thread": "nvmf_tgt_poll_group_000", 00:22:12.406 "listen_address": { 00:22:12.406 "trtype": "TCP", 00:22:12.406 "adrfam": "IPv4", 00:22:12.406 "traddr": "10.0.0.2", 00:22:12.406 "trsvcid": "4420" 00:22:12.406 }, 00:22:12.406 "peer_address": { 00:22:12.406 "trtype": "TCP", 00:22:12.406 "adrfam": "IPv4", 00:22:12.406 "traddr": "10.0.0.1", 00:22:12.406 "trsvcid": "38206" 00:22:12.406 }, 00:22:12.406 "auth": { 00:22:12.406 "state": "completed", 00:22:12.406 "digest": "sha512", 00:22:12.406 "dhgroup": "ffdhe8192" 00:22:12.406 } 00:22:12.406 } 00:22:12.406 ]' 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.406 
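[editor's note] The block above is one pass of the verification pattern the log repeats for every digest/dhgroup/key combination: attach with the candidate key, then assert the negotiated auth parameters on the live qpair. A condensed sketch of that check, distilled from the @44–@48 records (paths are relative to the SPDK workspace shown in the log, and the target is assumed to answer on the default /var/tmp/spdk.sock — both are assumptions, not the script's literal source):

    # Query the qpairs of the test subsystem and assert the negotiated
    # DH-HMAC-CHAP parameters for this pass (sha512 / ffdhe8192 here).
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Only when all three checks pass does the script detach and move to the next combination, which is why the same jq filters recur throughout the log.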
00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.406 00:47:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.664 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.231 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.489 00:47:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.057 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.057 { 00:22:14.057 "cntlid": 145, 00:22:14.057 "qid": 0, 00:22:14.057 "state": "enabled", 00:22:14.057 "thread": "nvmf_tgt_poll_group_000", 00:22:14.057 "listen_address": { 00:22:14.057 "trtype": "TCP", 00:22:14.057 "adrfam": "IPv4", 00:22:14.057 "traddr": "10.0.0.2", 00:22:14.057 "trsvcid": "4420" 00:22:14.057 }, 00:22:14.057 "peer_address": { 00:22:14.057 "trtype": "TCP", 00:22:14.057 "adrfam": "IPv4", 00:22:14.057 "traddr": "10.0.0.1", 00:22:14.057 "trsvcid": "38224" 00:22:14.057 }, 00:22:14.057 "auth": { 00:22:14.057 "state": "completed", 00:22:14.057 "digest": "sha512", 00:22:14.057 "dhgroup": "ffdhe8192" 00:22:14.057 } 00:22:14.057 } 00:22:14.057 ]' 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.057 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.316 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.316 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.316 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.316 00:47:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:M2QxYzkyMzg3M2MxNTQ0Zjc4MzIzMzFkOWViZTJkMWUxMTg1ZjFkODYyZjEyNjIyeOVKrA==: --dhchap-ctrl-secret DHHC-1:03:YzRkNGRlYjAwZGFhYTdmNmJiNDgwZWEzMDNhMDRkZDdjYjBhYWQwMDZjMTRjMDlmNGFjNDU2NzBiYTliYzE4OR1uVzk=: 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.884 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:15.452 request: 00:22:15.452 { 00:22:15.452 "name": "nvme0", 00:22:15.452 "trtype": "tcp", 00:22:15.452 "traddr": "10.0.0.2", 00:22:15.452 "adrfam": "ipv4", 00:22:15.452 "trsvcid": "4420", 00:22:15.452 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:15.452 "prchk_reftag": false, 00:22:15.452 "prchk_guard": false, 00:22:15.452 "hdgst": false, 00:22:15.452 "ddgst": false, 00:22:15.452 "dhchap_key": "key2", 00:22:15.452 "method": "bdev_nvme_attach_controller", 00:22:15.452 "req_id": 1 00:22:15.452 } 00:22:15.452 Got JSON-RPC error response 00:22:15.452 response: 00:22:15.452 { 00:22:15.452 "code": -5, 00:22:15.452 "message": "Input/output error" 00:22:15.452 } 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:15.452 00:47:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:16.020 request: 00:22:16.020 { 00:22:16.020 "name": "nvme0", 00:22:16.020 "trtype": "tcp", 00:22:16.020 "traddr": "10.0.0.2", 00:22:16.020 "adrfam": "ipv4", 00:22:16.020 "trsvcid": "4420", 00:22:16.020 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:16.020 "prchk_reftag": false, 00:22:16.020 "prchk_guard": false, 00:22:16.020 "hdgst": false, 00:22:16.020 "ddgst": false, 00:22:16.020 "dhchap_key": "key1", 00:22:16.020 "dhchap_ctrlr_key": "ckey2", 00:22:16.020 "method": "bdev_nvme_attach_controller", 00:22:16.020 "req_id": 1 00:22:16.020 } 00:22:16.020 Got JSON-RPC error response 00:22:16.020 response: 00:22:16.020 { 00:22:16.020 "code": -5, 00:22:16.020 "message": "Input/output error" 00:22:16.020 } 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.020 00:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.279 request: 00:22:16.279 { 00:22:16.279 "name": "nvme0", 00:22:16.279 "trtype": "tcp", 00:22:16.279 "traddr": "10.0.0.2", 00:22:16.279 "adrfam": "ipv4", 00:22:16.279 "trsvcid": "4420", 00:22:16.279 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:16.279 "prchk_reftag": false, 00:22:16.279 "prchk_guard": false, 00:22:16.279 "hdgst": false, 00:22:16.279 "ddgst": false, 00:22:16.279 "dhchap_key": "key1", 00:22:16.279 "dhchap_ctrlr_key": "ckey1", 00:22:16.279 "method": "bdev_nvme_attach_controller", 00:22:16.279 "req_id": 1 00:22:16.279 } 00:22:16.279 Got JSON-RPC error response 00:22:16.279 response: 00:22:16.279 { 00:22:16.279 "code": -5, 00:22:16.279 "message": "Input/output error" 00:22:16.279 } 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1409457 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1409457 ']' 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1409457 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409457 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409457' 00:22:16.279 killing process with pid 1409457 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1409457 00:22:16.279 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1409457 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1429698 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1429698 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1429698 ']' 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.538 00:47:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1429698 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1429698 ']' 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
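[editor's note] At this point the test kills the first target process (pid 1409457) and restarts nvmf_tgt with --wait-for-rpc and the nvmf_auth trace flag so the remaining negative tests produce auth-layer logging. A minimal sketch of that restart, with the waitforlisten helper approximated by a plain polling loop (the real helper in common/autotest_common.sh does more; the loop below is an assumption, not its source):

    # Relaunch the target inside the test netns with auth tracing enabled,
    # then poll its RPC socket until it is ready to accept commands.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The --wait-for-rpc flag holds the app in a pre-init state, which is why the log next issues a bare rpc_cmd (@143) before re-registering the subsystem host keys.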
00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.473 00:47:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.731 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.346 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.346 { 00:22:18.346 
"cntlid": 1, 00:22:18.346 "qid": 0, 00:22:18.346 "state": "enabled", 00:22:18.346 "thread": "nvmf_tgt_poll_group_000", 00:22:18.346 "listen_address": { 00:22:18.346 "trtype": "TCP", 00:22:18.346 "adrfam": "IPv4", 00:22:18.346 "traddr": "10.0.0.2", 00:22:18.346 "trsvcid": "4420" 00:22:18.346 }, 00:22:18.346 "peer_address": { 00:22:18.346 "trtype": "TCP", 00:22:18.346 "adrfam": "IPv4", 00:22:18.346 "traddr": "10.0.0.1", 00:22:18.346 "trsvcid": "58372" 00:22:18.346 }, 00:22:18.346 "auth": { 00:22:18.346 "state": "completed", 00:22:18.346 "digest": "sha512", 00:22:18.346 "dhgroup": "ffdhe8192" 00:22:18.346 } 00:22:18.346 } 00:22:18.346 ]' 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.346 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.632 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.632 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.632 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.632 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.632 00:47:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.633 00:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MGU2MzdlNDc5MWUwZmY5MWE4ZGRjY2U5NDkyODJkZTdhODYyNGVjMTQ5Nzg0ODk3NmRmMzQ3M2Q0MWU0Y2VmNtvd6uw=: 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:19.199 00:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:19.456 00:47:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.456 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:19.456 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.456 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:19.457 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.457 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:19.457 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.457 00:47:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.457 00:47:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.715 request: 00:22:19.715 { 00:22:19.715 "name": "nvme0", 00:22:19.715 "trtype": "tcp", 00:22:19.715 "traddr": "10.0.0.2", 00:22:19.715 "adrfam": "ipv4", 00:22:19.715 "trsvcid": "4420", 00:22:19.715 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:19.715 "prchk_reftag": false, 00:22:19.715 "prchk_guard": false, 00:22:19.715 "hdgst": false, 00:22:19.715 "ddgst": false, 00:22:19.715 "dhchap_key": "key3", 00:22:19.715 "method": "bdev_nvme_attach_controller", 00:22:19.715 "req_id": 1 00:22:19.715 } 00:22:19.715 Got JSON-RPC error response 00:22:19.715 response: 00:22:19.715 { 00:22:19.715 "code": -5, 00:22:19.715 "message": "Input/output error" 00:22:19.715 } 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:19.715 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.973 request: 00:22:19.973 { 00:22:19.973 "name": "nvme0", 00:22:19.973 "trtype": "tcp", 00:22:19.973 "traddr": "10.0.0.2", 00:22:19.973 "adrfam": "ipv4", 00:22:19.973 "trsvcid": "4420", 00:22:19.973 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:19.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:19.973 "prchk_reftag": false, 00:22:19.973 "prchk_guard": false, 00:22:19.973 "hdgst": false, 00:22:19.973 "ddgst": false, 00:22:19.973 "dhchap_key": "key3", 00:22:19.973 "method": "bdev_nvme_attach_controller", 00:22:19.973 "req_id": 1 00:22:19.973 } 00:22:19.973 Got JSON-RPC error response 00:22:19.973 response: 00:22:19.973 { 00:22:19.973 "code": -5, 00:22:19.973 "message": "Input/output error" 00:22:19.973 } 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.973 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:19.974 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:19.974 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:19.974 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.974 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.974 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:20.233 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.234 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.234 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:20.491 request: 00:22:20.491 { 00:22:20.491 "name": "nvme0", 00:22:20.491 "trtype": "tcp", 00:22:20.491 "traddr": "10.0.0.2", 00:22:20.491 "adrfam": "ipv4", 00:22:20.491 "trsvcid": "4420", 00:22:20.491 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:20.491 "prchk_reftag": false, 00:22:20.491 "prchk_guard": false, 00:22:20.491 "hdgst": false, 00:22:20.491 "ddgst": false, 00:22:20.491 
"dhchap_key": "key0", 00:22:20.491 "dhchap_ctrlr_key": "key1", 00:22:20.491 "method": "bdev_nvme_attach_controller", 00:22:20.491 "req_id": 1 00:22:20.491 } 00:22:20.491 Got JSON-RPC error response 00:22:20.491 response: 00:22:20.491 { 00:22:20.491 "code": -5, 00:22:20.491 "message": "Input/output error" 00:22:20.491 } 00:22:20.491 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:20.491 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.491 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:20.491 00:47:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:20.491 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:20.491 00:47:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:20.749 00:22:20.749 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:20.749 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:20.749 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1409554 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1409554 ']' 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1409554 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409554 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409554' 00:22:21.007 killing process with pid 1409554 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1409554 00:22:21.007 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1409554 
00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.572 rmmod nvme_tcp 00:22:21.572 rmmod nvme_fabrics 00:22:21.572 rmmod nvme_keyring 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1429698 ']' 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1429698 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1429698 ']' 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1429698 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1429698 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1429698' 00:22:21.572 killing process with pid 1429698 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1429698 00:22:21.572 00:47:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1429698 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.830 00:47:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.736 00:47:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.736 00:47:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Zef /tmp/spdk.key-sha256.BtS /tmp/spdk.key-sha384.pat /tmp/spdk.key-sha512.CaU /tmp/spdk.key-sha512.x4K /tmp/spdk.key-sha384.Yqq /tmp/spdk.key-sha256.eCU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:23.736 00:22:23.736 real 2m11.856s 00:22:23.736 user 5m3.226s 00:22:23.736 sys 0m20.974s 00:22:23.736 00:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:23.736 00:47:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.736 ************************************ 00:22:23.736 END TEST nvmf_auth_target 00:22:23.736 ************************************ 00:22:23.736 00:47:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:23.736 00:47:35 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:23.736 00:47:35 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:23.736 00:47:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:23.736 00:47:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.736 00:47:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.736 ************************************ 00:22:23.736 START TEST nvmf_bdevio_no_huge 00:22:23.736 ************************************ 00:22:23.736 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:23.995 * Looking for test storage... 00:22:23.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
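Before any target comes up, each suite sources test/nvmf/common.sh, which mints a fresh host identity per run: nvme gen-hostnqn produces the UUID-based host NQN seen throughout this log, and the host ID is its UUID suffix. A sketch of that derivation (the parameter expansion is an assumption consistent with the values printed above):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' -> bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")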
00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.995 00:47:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.995 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.996 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.996 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.996 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.996 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.996 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.996 00:47:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
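This gather_supported_nvmf_pci_devs walk builds per-family tables of PCI vendor:device IDs (e810, x722, mlx) and then matches each NIC in the system against them; both ports here report 0x8086:0x159b, the Intel E810 family bound to the ice driver. A standalone sketch of the same classification (IDs taken from the tables in the trace):

  pci=0000:86:00.0
  id=$(lspci -n -s "$pci" | awk '{print $3}')   # -> 8086:159b
  case "$id" in
      8086:1592|8086:159b) echo "e810 (ice)" ;;
      8086:37d2)           echo "x722" ;;
      15b3:*)              echo "mellanox" ;;
      *)                   echo "not a supported nvmf test NIC" ;;
  esac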
00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:29.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.273 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:29.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.532 
00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.532 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:29.533 Found net devices under 0000:86:00.0: cvl_0_0 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:29.533 Found net devices under 0000:86:00.1: cvl_0_1 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.533 00:47:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.533 00:47:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:22:29.533 00:22:29.533 --- 10.0.0.2 ping statistics --- 00:22:29.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.533 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:22:29.533 00:22:29.533 --- 10.0.0.1 ping statistics --- 00:22:29.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.533 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:29.533 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1433977 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1433977 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1433977 ']' 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.792 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.792 [2024-07-13 00:47:41.156112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:29.792 [2024-07-13 00:47:41.156157] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:29.792 [2024-07-13 00:47:41.230568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.792 [2024-07-13 00:47:41.296685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.792 [2024-07-13 00:47:41.296721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.792 [2024-07-13 00:47:41.296728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.792 [2024-07-13 00:47:41.296734] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.792 [2024-07-13 00:47:41.296739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
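nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --no-huge -s 1024 (1024 MB of ordinary pages instead of hugepages, the point of this suite) and core mask 0x78. That mask has bits 3 through 6 set, which is why exactly four reactors start on cores 3, 4, 5 and 6 in the notices below; a quick check:

  # 0x78 = 0b01111000 -> cores 3,4,5,6
  for i in {0..7}; do (( (0x78 >> i) & 1 )) && echo "core $i"; done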
00:22:29.792 [2024-07-13 00:47:41.296863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:29.792 [2024-07-13 00:47:41.296970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:29.792 [2024-07-13 00:47:41.297062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:29.792 [2024-07-13 00:47:41.297062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.730 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.730 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:30.730 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.730 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.730 00:47:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.730 [2024-07-13 00:47:42.013379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.730 Malloc0 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.730 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.731 [2024-07-13 00:47:42.057608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.731 { 00:22:30.731 "params": { 00:22:30.731 "name": "Nvme$subsystem", 00:22:30.731 "trtype": "$TEST_TRANSPORT", 00:22:30.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.731 "adrfam": "ipv4", 00:22:30.731 "trsvcid": "$NVMF_PORT", 00:22:30.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.731 "hdgst": ${hdgst:-false}, 00:22:30.731 "ddgst": ${ddgst:-false} 00:22:30.731 }, 00:22:30.731 "method": "bdev_nvme_attach_controller" 00:22:30.731 } 00:22:30.731 EOF 00:22:30.731 )") 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:30.731 00:47:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:30.731 "params": { 00:22:30.731 "name": "Nvme1", 00:22:30.731 "trtype": "tcp", 00:22:30.731 "traddr": "10.0.0.2", 00:22:30.731 "adrfam": "ipv4", 00:22:30.731 "trsvcid": "4420", 00:22:30.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.731 "hdgst": false, 00:22:30.731 "ddgst": false 00:22:30.731 }, 00:22:30.731 "method": "bdev_nvme_attach_controller" 00:22:30.731 }' 00:22:30.731 [2024-07-13 00:47:42.104671] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
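bdevio takes a full SPDK application config on an anonymous fd (--json /dev/fd/62); the fragment printed above is the bdev_nvme_attach_controller entry that gen_nvmf_target_json wraps into that config. A rough standalone equivalent (a sketch: the "subsystems"/"bdev" wrapper is the standard SPDK app-config shape, assumed here rather than taken from the helper):

  cat > /tmp/nvme1.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  test/bdev/bdevio/bdevio --json /tmp/nvme1.json --no-huge -s 1024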
00:22:30.731 [2024-07-13 00:47:42.104717] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1434207 ] 00:22:30.731 [2024-07-13 00:47:42.173680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:30.731 [2024-07-13 00:47:42.239163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.731 [2024-07-13 00:47:42.239270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.731 [2024-07-13 00:47:42.239270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.990 I/O targets: 00:22:30.990 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:30.990 00:22:30.990 00:22:30.990 CUnit - A unit testing framework for C - Version 2.1-3 00:22:30.990 http://cunit.sourceforge.net/ 00:22:30.990 00:22:30.990 00:22:30.990 Suite: bdevio tests on: Nvme1n1 00:22:30.990 Test: blockdev write read block ...passed 00:22:30.990 Test: blockdev write zeroes read block ...passed 00:22:30.990 Test: blockdev write zeroes read no split ...passed 00:22:30.990 Test: blockdev write zeroes read split ...passed 00:22:30.990 Test: blockdev write zeroes read split partial ...passed 00:22:30.990 Test: blockdev reset ...[2024-07-13 00:47:42.541213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.990 [2024-07-13 00:47:42.541283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ffe80 (9): Bad file descriptor 00:22:31.250 [2024-07-13 00:47:42.555313] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:31.250 passed 00:22:31.250 Test: blockdev write read 8 blocks ...passed 00:22:31.250 Test: blockdev write read size > 128k ...passed 00:22:31.250 Test: blockdev write read invalid size ...passed 00:22:31.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:31.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:31.250 Test: blockdev write read max offset ...passed 00:22:31.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:31.250 Test: blockdev writev readv 8 blocks ...passed 00:22:31.250 Test: blockdev writev readv 30 x 1block ...passed 00:22:31.250 Test: blockdev writev readv block ...passed 00:22:31.250 Test: blockdev writev readv size > 128k ...passed 00:22:31.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:31.250 Test: blockdev comparev and writev ...[2024-07-13 00:47:42.767081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.250 [2024-07-13 00:47:42.767123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:31.250 [2024-07-13 00:47:42.767387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:31.250 [2024-07-13 00:47:42.767409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:31.250 [2024-07-13 00:47:42.767653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:31.250 [2024-07-13 00:47:42.767674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:31.250 [2024-07-13 00:47:42.767918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.250 [2024-07-13 00:47:42.767939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.250 [2024-07-13 00:47:42.767946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.250 passed 00:22:31.508 Test: blockdev nvme passthru rw ...passed 00:22:31.508 Test: blockdev nvme passthru vendor specific ...[2024-07-13 00:47:42.849507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.508 [2024-07-13 00:47:42.849523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:31.508 [2024-07-13 00:47:42.849634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.508 [2024-07-13 00:47:42.849647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:31.508 [2024-07-13 00:47:42.849752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.508 [2024-07-13 00:47:42.849760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:31.508 [2024-07-13 00:47:42.849868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.508 [2024-07-13 00:47:42.849877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:31.508 passed 00:22:31.508 Test: blockdev nvme admin passthru ...passed 00:22:31.508 Test: blockdev copy ...passed 00:22:31.508 00:22:31.508 Run Summary: Type Total Ran Passed Failed Inactive 00:22:31.508 suites 1 1 n/a 0 0 00:22:31.508 tests 23 23 23 0 0 00:22:31.508 asserts 152 152 152 0 n/a 00:22:31.508 00:22:31.508 Elapsed time = 1.062 seconds 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.766 rmmod nvme_tcp 00:22:31.766 rmmod nvme_fabrics 00:22:31.766 rmmod nvme_keyring 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1433977 ']' 00:22:31.766 00:47:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1433977 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1433977 ']' 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1433977 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1433977 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1433977' 00:22:31.766 killing process with pid 1433977 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1433977 00:22:31.766 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1433977 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.334 00:47:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.240 00:47:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.240 00:22:34.240 real 0m10.377s 00:22:34.240 user 0m12.351s 00:22:34.240 sys 0m5.210s 00:22:34.240 00:47:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.240 00:47:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.240 ************************************ 00:22:34.240 END TEST nvmf_bdevio_no_huge 00:22:34.240 ************************************ 00:22:34.240 00:47:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:34.240 00:47:45 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:34.240 00:47:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:34.240 00:47:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.240 00:47:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.240 ************************************ 00:22:34.240 START TEST nvmf_tls 00:22:34.240 ************************************ 00:22:34.240 00:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:34.501 * Looking for test storage... 
00:22:34.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.501 00:47:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.776 
00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.776 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:39.777 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:39.777 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:39.777 Found net devices under 0000:86:00.0: cvl_0_0 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:39.777 Found net devices under 0000:86:00.1: cvl_0_1 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.777 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:40.036 00:22:40.036 --- 10.0.0.2 ping statistics --- 00:22:40.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.036 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:22:40.036 00:22:40.036 --- 10.0.0.1 ping statistics --- 00:22:40.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.036 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1437939 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1437939 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1437939 ']' 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.036 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.295 [2024-07-13 00:47:51.597124] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:40.295 [2024-07-13 00:47:51.597168] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.295 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.295 [2024-07-13 00:47:51.670136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.295 [2024-07-13 00:47:51.708476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.295 [2024-07-13 00:47:51.708517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
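Condensed, the nvmftestinit sequence traced above does the following: one E810 port (cvl_0_0) is moved into a fresh network namespace to serve as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, both ends get 10.0.0.0/24 addresses, TCP port 4420 is opened, and a ping in each direction proves the link before nvmf_tgt is started inside the namespace. A sketch of the commands as traced (interface names are specific to this node's two E810 ports):

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator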
00:22:40.295 [2024-07-13 00:47:51.708523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.295 [2024-07-13 00:47:51.708529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.295 [2024-07-13 00:47:51.708534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.295 [2024-07-13 00:47:51.708568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.862 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.863 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:40.863 00:47:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.863 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.863 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.121 00:47:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.121 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:41.121 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:41.121 true 00:22:41.121 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.121 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:41.390 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:41.390 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:41.390 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:41.655 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.655 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:41.655 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:41.655 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:41.655 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:41.914 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.914 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:42.172 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:42.172 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:42.172 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.172 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:42.172 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:42.172 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:42.172 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:42.431 00:47:53 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.431 00:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:42.689 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:42.689 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:42.689 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:42.689 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.689 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1nRh3goj4V 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.9tkZ0SsZ5g 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1nRh3goj4V 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9tkZ0SsZ5g 00:22:42.948 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:43.205 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:43.463 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1nRh3goj4V 00:22:43.463 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1nRh3goj4V 00:22:43.463 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.721 [2024-07-13 00:47:55.036617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.721 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:43.721 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:43.981 [2024-07-13 00:47:55.377495] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:43.981 [2024-07-13 00:47:55.377700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.981 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.299 malloc0 00:22:44.299 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.299 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1nRh3goj4V 00:22:44.558 [2024-07-13 00:47:55.911237] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:44.558 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1nRh3goj4V 00:22:44.558 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.543 Initializing NVMe Controllers 00:22:54.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:54.543 Initialization complete. Launching workers. 
00:22:54.543 ======================================================== 00:22:54.543 Latency(us) 00:22:54.543 Device Information : IOPS MiB/s Average min max 00:22:54.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16332.95 63.80 3918.88 912.47 5954.34 00:22:54.543 ======================================================== 00:22:54.543 Total : 16332.95 63.80 3918.88 912.47 5954.34 00:22:54.543 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1nRh3goj4V 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1nRh3goj4V' 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1440425 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1440425 /var/tmp/bdevperf.sock 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1440425 ']' 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.543 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.543 [2024-07-13 00:48:06.080807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
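This completes the positive path: the ssl sock implementation is pinned to TLS 1.3, format_interchange_psk wraps each raw key into the interchange form used on the wire (NVMeTLSkey-1:01:<base64 payload>:, where the 01 field appears to select the SHA-256 retained-PSK variant; that reading is inferred, the trace only shows the resulting strings), and spdk_nvme_perf then sustains roughly 16.3k IOPS of 4 KiB randrw at queue depth 64 through the encrypted connection. Reduced to its essentials, the setup_nvmf_tgt sequence traced above is:

    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k          # -k: TLS listener (flagged experimental above)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.1nRh3goj4V              # key file written above, chmod 0600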
00:22:54.543 [2024-07-13 00:48:06.080859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440425 ] 00:22:54.802 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.802 [2024-07-13 00:48:06.147886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.802 [2024-07-13 00:48:06.188641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.802 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.802 00:48:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:54.802 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1nRh3goj4V 00:22:55.061 [2024-07-13 00:48:06.428000] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.061 [2024-07-13 00:48:06.428067] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:55.062 TLSTESTn1 00:22:55.062 00:48:06 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:55.062 Running I/O for 10 seconds... 00:23:07.275 00:23:07.275 Latency(us) 00:23:07.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.275 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.275 Verification LBA range: start 0x0 length 0x2000 00:23:07.275 TLSTESTn1 : 10.02 4842.07 18.91 0.00 0.00 26394.56 5442.34 31685.23 00:23:07.275 =================================================================================================================== 00:23:07.275 Total : 4842.07 18.91 0.00 0.00 26394.56 5442.34 31685.23 00:23:07.275 0 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1440425 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1440425 ']' 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1440425 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1440425 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1440425' 00:23:07.275 killing process with pid 1440425 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1440425 00:23:07.275 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.275 00:23:07.275 Latency(us) 00:23:07.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:07.275 =================================================================================================================== 00:23:07.275 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.275 [2024-07-13 00:48:16.702973] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1440425 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9tkZ0SsZ5g 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9tkZ0SsZ5g 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9tkZ0SsZ5g 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9tkZ0SsZ5g' 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1442491 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1442491 /var/tmp/bdevperf.sock 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1442491 ']' 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.275 00:48:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.275 [2024-07-13 00:48:16.922590] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:07.275 [2024-07-13 00:48:16.922638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442491 ] 00:23:07.275 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.275 [2024-07-13 00:48:16.988054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.275 [2024-07-13 00:48:17.027007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9tkZ0SsZ5g 00:23:07.275 [2024-07-13 00:48:17.275022] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.275 [2024-07-13 00:48:17.275115] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.275 [2024-07-13 00:48:17.280599] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.275 [2024-07-13 00:48:17.281326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7fff0 (107): Transport endpoint is not connected 00:23:07.275 [2024-07-13 00:48:17.282320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7fff0 (9): Bad file descriptor 00:23:07.275 [2024-07-13 00:48:17.283321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.275 [2024-07-13 00:48:17.283330] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.275 [2024-07-13 00:48:17.283339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
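The first expected-failure case hands host1 the second key (/tmp/tmp.9tkZ0SsZ5g), which was never registered for that host, so the target drops the connection during the TLS handshake: the initiator sees errno 107 (Transport endpoint is not connected), then a dead descriptor, and the controller ends up in failed state. The JSON-RPC dump that follows is bdevperf's view of the same failure, and the -5 (Input/output error) it carries is exactly what the NOT wrapper requires. The attach attempt, condensed from the trace (backgrounding of the -z bdevperf added here for illustration):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.9tkZ0SsZ5g              # key not registered for host1 -> must fail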
00:23:07.275 request: 00:23:07.275 { 00:23:07.275 "name": "TLSTEST", 00:23:07.275 "trtype": "tcp", 00:23:07.275 "traddr": "10.0.0.2", 00:23:07.275 "adrfam": "ipv4", 00:23:07.275 "trsvcid": "4420", 00:23:07.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.275 "prchk_reftag": false, 00:23:07.275 "prchk_guard": false, 00:23:07.275 "hdgst": false, 00:23:07.275 "ddgst": false, 00:23:07.275 "psk": "/tmp/tmp.9tkZ0SsZ5g", 00:23:07.275 "method": "bdev_nvme_attach_controller", 00:23:07.275 "req_id": 1 00:23:07.275 } 00:23:07.275 Got JSON-RPC error response 00:23:07.275 response: 00:23:07.275 { 00:23:07.275 "code": -5, 00:23:07.275 "message": "Input/output error" 00:23:07.275 } 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1442491 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1442491 ']' 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1442491 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1442491 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1442491' 00:23:07.275 killing process with pid 1442491 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1442491 00:23:07.275 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.275 00:23:07.275 Latency(us) 00:23:07.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.275 =================================================================================================================== 00:23:07.275 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.275 [2024-07-13 00:48:17.352138] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1442491 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1nRh3goj4V 00:23:07.275 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1nRh3goj4V 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1nRh3goj4V 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1nRh3goj4V' 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1442661 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1442661 /var/tmp/bdevperf.sock 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1442661 ']' 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.276 [2024-07-13 00:48:17.564942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:07.276 [2024-07-13 00:48:17.564992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442661 ] 00:23:07.276 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.276 [2024-07-13 00:48:17.631185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.276 [2024-07-13 00:48:17.671472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1nRh3goj4V 00:23:07.276 [2024-07-13 00:48:17.907776] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.276 [2024-07-13 00:48:17.907854] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.276 [2024-07-13 00:48:17.912423] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.276 [2024-07-13 00:48:17.912447] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.276 [2024-07-13 00:48:17.912471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.276 [2024-07-13 00:48:17.913118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1aff0 (107): Transport endpoint is not connected 00:23:07.276 [2024-07-13 00:48:17.914110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1aff0 (9): Bad file descriptor 00:23:07.276 [2024-07-13 00:48:17.915111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.276 [2024-07-13 00:48:17.915120] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.276 [2024-07-13 00:48:17.915128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:07.276 request: 00:23:07.276 { 00:23:07.276 "name": "TLSTEST", 00:23:07.276 "trtype": "tcp", 00:23:07.276 "traddr": "10.0.0.2", 00:23:07.276 "adrfam": "ipv4", 00:23:07.276 "trsvcid": "4420", 00:23:07.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.276 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.276 "prchk_reftag": false, 00:23:07.276 "prchk_guard": false, 00:23:07.276 "hdgst": false, 00:23:07.276 "ddgst": false, 00:23:07.276 "psk": "/tmp/tmp.1nRh3goj4V", 00:23:07.276 "method": "bdev_nvme_attach_controller", 00:23:07.276 "req_id": 1 00:23:07.276 } 00:23:07.276 Got JSON-RPC error response 00:23:07.276 response: 00:23:07.276 { 00:23:07.276 "code": -5, 00:23:07.276 "message": "Input/output error" 00:23:07.276 } 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1442661 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1442661 ']' 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1442661 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1442661 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1442661' 00:23:07.276 killing process with pid 1442661 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1442661 00:23:07.276 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.276 00:23:07.276 Latency(us) 00:23:07.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.276 =================================================================================================================== 00:23:07.276 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.276 [2024-07-13 00:48:17.985893] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.276 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1442661 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1nRh3goj4V 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1nRh3goj4V 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1nRh3goj4V 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1nRh3goj4V' 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1442675 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1442675 /var/tmp/bdevperf.sock 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1442675 ']' 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.276 [2024-07-13 00:48:18.198370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:07.276 [2024-07-13 00:48:18.198417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442675 ] 00:23:07.276 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.276 [2024-07-13 00:48:18.266786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.276 [2024-07-13 00:48:18.306319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.276 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1nRh3goj4V 00:23:07.276 [2024-07-13 00:48:18.554270] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.276 [2024-07-13 00:48:18.554347] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.276 [2024-07-13 00:48:18.563767] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:07.276 [2024-07-13 00:48:18.563788] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:07.276 [2024-07-13 00:48:18.563809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.276 [2024-07-13 00:48:18.564514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176ff0 (107): Transport endpoint is not connected 00:23:07.276 [2024-07-13 00:48:18.565508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176ff0 (9): Bad file descriptor 00:23:07.276 [2024-07-13 00:48:18.566509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:07.277 [2024-07-13 00:48:18.566522] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.277 [2024-07-13 00:48:18.566531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:07.277 request: 00:23:07.277 { 00:23:07.277 "name": "TLSTEST", 00:23:07.277 "trtype": "tcp", 00:23:07.277 "traddr": "10.0.0.2", 00:23:07.277 "adrfam": "ipv4", 00:23:07.277 "trsvcid": "4420", 00:23:07.277 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:07.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.277 "prchk_reftag": false, 00:23:07.277 "prchk_guard": false, 00:23:07.277 "hdgst": false, 00:23:07.277 "ddgst": false, 00:23:07.277 "psk": "/tmp/tmp.1nRh3goj4V", 00:23:07.277 "method": "bdev_nvme_attach_controller", 00:23:07.277 "req_id": 1 00:23:07.277 } 00:23:07.277 Got JSON-RPC error response 00:23:07.277 response: 00:23:07.277 { 00:23:07.277 "code": -5, 00:23:07.277 "message": "Input/output error" 00:23:07.277 } 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1442675 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1442675 ']' 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1442675 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1442675 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1442675' 00:23:07.277 killing process with pid 1442675 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1442675 00:23:07.277 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.277 00:23:07.277 Latency(us) 00:23:07.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.277 =================================================================================================================== 00:23:07.277 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.277 [2024-07-13 00:48:18.640252] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1442675 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1442907 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1442907 /var/tmp/bdevperf.sock 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1442907 ']' 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.277 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.536 [2024-07-13 00:48:18.854087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:07.536 [2024-07-13 00:48:18.854134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442907 ] 00:23:07.536 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.536 [2024-07-13 00:48:18.922190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.536 [2024-07-13 00:48:18.958782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.536 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.536 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.536 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:07.796 [2024-07-13 00:48:19.205456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.796 [2024-07-13 00:48:19.207221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e785e0 (9): Bad file descriptor 00:23:07.796 [2024-07-13 00:48:19.208218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.796 [2024-07-13 00:48:19.208229] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.796 [2024-07-13 00:48:19.208238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
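For reference, the bdev_nvme_attach_controller call that target/tls.sh@34 issues here travels over SPDK's JSON-RPC Unix socket; the request/response pair is dumped next. A minimal Python sketch of the same exchange, assuming the socket path from this run and a simple read-until-parse loop standing in for scripts/rpc.py's own response handling:

import json
import socket

# Parameters mirror the request dump below; "psk" is deliberately absent,
# since this is the expected-failure case that target/tls.sh@155 drives.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
    },
}

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/bdevperf.sock")
sock.sendall(json.dumps(request).encode())

# Read until the buffer parses as one complete JSON document.
buf = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        raise ConnectionError("socket closed before a full response arrived")
    buf += chunk
    try:
        response, _ = json.JSONDecoder().raw_decode(buf.decode())
        break
    except ValueError:
        continue
print(response)  # on this run: error code -5, "Input/output error"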
00:23:07.796 request: 00:23:07.796 { 00:23:07.796 "name": "TLSTEST", 00:23:07.796 "trtype": "tcp", 00:23:07.796 "traddr": "10.0.0.2", 00:23:07.796 "adrfam": "ipv4", 00:23:07.796 "trsvcid": "4420", 00:23:07.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.796 "prchk_reftag": false, 00:23:07.796 "prchk_guard": false, 00:23:07.796 "hdgst": false, 00:23:07.796 "ddgst": false, 00:23:07.796 "method": "bdev_nvme_attach_controller", 00:23:07.796 "req_id": 1 00:23:07.796 } 00:23:07.796 Got JSON-RPC error response 00:23:07.796 response: 00:23:07.796 { 00:23:07.796 "code": -5, 00:23:07.796 "message": "Input/output error" 00:23:07.796 } 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1442907 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1442907 ']' 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1442907 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1442907 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1442907' 00:23:07.796 killing process with pid 1442907 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1442907 00:23:07.796 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.796 00:23:07.796 Latency(us) 00:23:07.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.796 =================================================================================================================== 00:23:07.796 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.796 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1442907 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1437939 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1437939 ']' 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1437939 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1437939 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1437939' 00:23:08.055 
killing process with pid 1437939 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1437939 00:23:08.055 [2024-07-13 00:48:19.492368] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:08.055 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1437939 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.3mTVh2ia4q 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.3mTVh2ia4q 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1443011 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1443011 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1443011 ']' 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.315 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.315 [2024-07-13 00:48:19.782797] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
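The key_long value that target/tls.sh@159 builds above follows the NVMe TLS PSK interchange format: an NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here), and the base64 of the configured key bytes followed by their little-endian CRC32. A Python equivalent of the nvmf/common.sh format_key helper invoked above (a sketch; the function name and the hex formatting of the digest field are assumptions):

import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Append the little-endian CRC32 of the key bytes, base64 the result,
    # and wrap it in the NVMeTLSkey-1 header with a two-digit hash id.
    raw = key.encode("ascii")
    blob = raw + struct.pack("<I", zlib.crc32(raw))
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(blob).decode())

key = format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
assert key == ("NVMeTLSkey-1:02:"
               "MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3"
               "wWXNJw==:")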
00:23:08.315 [2024-07-13 00:48:19.782844] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.315 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.315 [2024-07-13 00:48:19.848844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.574 [2024-07-13 00:48:19.888436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.574 [2024-07-13 00:48:19.888473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.574 [2024-07-13 00:48:19.888481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.574 [2024-07-13 00:48:19.888487] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.574 [2024-07-13 00:48:19.888493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.574 [2024-07-13 00:48:19.888528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.574 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.574 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:08.575 00:48:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.575 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.575 00:48:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.575 00:48:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.575 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.3mTVh2ia4q 00:23:08.575 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3mTVh2ia4q 00:23:08.575 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.833 [2024-07-13 00:48:20.181474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.833 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:09.092 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:09.092 [2024-07-13 00:48:20.558432] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.092 [2024-07-13 00:48:20.558639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.092 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:09.350 malloc0 00:23:09.350 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:09.610 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.3mTVh2ia4q 00:23:09.610 [2024-07-13 00:48:21.112187] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3mTVh2ia4q 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3mTVh2ia4q' 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1443253 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1443253 /var/tmp/bdevperf.sock 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1443253 ']' 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.610 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.869 [2024-07-13 00:48:21.183240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
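In JSON-RPC terms, the -k flag passed to nvmf_subsystem_add_listener at target/tls.sh@53 maps to the secure_channel parameter, and --psk at tls.sh@58 maps to the psk parameter, as the save_config dump further below in this log confirms. The parameter shapes, with values from this run:

# Parameter shapes for the two TLS-relevant target-side calls above; "-k"
# becomes secure_channel and "--psk" becomes psk (both are visible in the
# save_config dump captured later in this run).
add_listener_params = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "listen_address": {
        "trtype": "TCP",
        "adrfam": "IPv4",
        "traddr": "10.0.0.2",
        "trsvcid": "4420",
    },
    "secure_channel": True,
}
add_host_params = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "host": "nqn.2016-06.io.spdk:host1",
    "psk": "/tmp/tmp.3mTVh2ia4q",
}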
00:23:09.869 [2024-07-13 00:48:21.183287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1443253 ] 00:23:09.869 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.869 [2024-07-13 00:48:21.249043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.869 [2024-07-13 00:48:21.288624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.869 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.869 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:09.869 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3mTVh2ia4q 00:23:10.128 [2024-07-13 00:48:21.536505] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.128 [2024-07-13 00:48:21.536585] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:10.128 TLSTESTn1 00:23:10.128 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:10.387 Running I/O for 10 seconds... 00:23:20.366 00:23:20.367 Latency(us) 00:23:20.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.367 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:20.367 Verification LBA range: start 0x0 length 0x2000 00:23:20.367 TLSTESTn1 : 10.03 5438.52 21.24 0.00 0.00 23492.78 4900.95 35788.35 00:23:20.367 =================================================================================================================== 00:23:20.367 Total : 5438.52 21.24 0.00 0.00 23492.78 4900.95 35788.35 00:23:20.367 0 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1443253 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1443253 ']' 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1443253 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1443253 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1443253' 00:23:20.367 killing process with pid 1443253 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1443253 00:23:20.367 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.367 00:23:20.367 Latency(us) 00:23:20.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:20.367 =================================================================================================================== 00:23:20.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.367 [2024-07-13 00:48:31.835643] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.367 00:48:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1443253 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.3mTVh2ia4q 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3mTVh2ia4q 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3mTVh2ia4q 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3mTVh2ia4q 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3mTVh2ia4q' 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1445019 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1445019 /var/tmp/bdevperf.sock 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1445019 ']' 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.626 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.626 [2024-07-13 00:48:32.059509] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
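The chmod 0666 at target/tls.sh@170 sets up the next expected failure: the initiator refuses to load a PSK file that group or others can access. A sketch of an equivalent check (the exact mode mask SPDK enforces is an assumption; what the log shows is that 0600 passes and 0666 fails):

import os
import stat

def check_psk_file_perms(path: str) -> None:
    # Reject key files readable or writable by group or others; the 0666
    # set just above trips this, matching "Incorrect permissions for PSK file".
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"incorrect permissions for PSK file {path}: {oct(mode)}")

check_psk_file_perms("/tmp/tmp.3mTVh2ia4q")  # raises while the file is 0666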
00:23:20.626 [2024-07-13 00:48:32.059559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445019 ] 00:23:20.626 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.626 [2024-07-13 00:48:32.122819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.627 [2024-07-13 00:48:32.159109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.885 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.885 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:20.885 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3mTVh2ia4q 00:23:20.885 [2024-07-13 00:48:32.410129] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.885 [2024-07-13 00:48:32.410182] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:20.885 [2024-07-13 00:48:32.410189] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.3mTVh2ia4q 00:23:20.885 request: 00:23:20.885 { 00:23:20.885 "name": "TLSTEST", 00:23:20.885 "trtype": "tcp", 00:23:20.885 "traddr": "10.0.0.2", 00:23:20.885 "adrfam": "ipv4", 00:23:20.886 "trsvcid": "4420", 00:23:20.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.886 "prchk_reftag": false, 00:23:20.886 "prchk_guard": false, 00:23:20.886 "hdgst": false, 00:23:20.886 "ddgst": false, 00:23:20.886 "psk": "/tmp/tmp.3mTVh2ia4q", 00:23:20.886 "method": "bdev_nvme_attach_controller", 00:23:20.886 "req_id": 1 00:23:20.886 } 00:23:20.886 Got JSON-RPC error response 00:23:20.886 response: 00:23:20.886 { 00:23:20.886 "code": -1, 00:23:20.886 "message": "Operation not permitted" 00:23:20.886 } 00:23:20.886 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1445019 00:23:20.886 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1445019 ']' 00:23:20.886 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1445019 00:23:20.886 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1445019 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1445019' 00:23:21.145 killing process with pid 1445019 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1445019 00:23:21.145 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.145 00:23:21.145 Latency(us) 00:23:21.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.145 
=================================================================================================================== 00:23:21.145 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1445019 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1443011 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1443011 ']' 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1443011 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1443011 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1443011' 00:23:21.145 killing process with pid 1443011 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1443011 00:23:21.145 [2024-07-13 00:48:32.694219] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:21.145 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1443011 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1445254 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1445254 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1445254 ']' 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
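The error codes in these negative tests are negated errno values: -5 (EIO, "Input/output error") for the failed TLS handshakes, -1 (EPERM, "Operation not permitted") for the permission case just above, while -32603, seen further below when nvmf_subsystem_add_host fails, is JSON-RPC's own internal-error code. A small classifier, for reference:

import errno

def classify_rpc_error(code: int) -> str:
    # -32603 comes from the JSON-RPC spec; the other negative codes in this
    # log are negated errno values.
    if code == -32603:
        return "JSON-RPC internal error"
    if code < 0 and -code in errno.errorcode:
        return errno.errorcode[-code]
    return "unknown"

assert classify_rpc_error(-5) == "EIO"    # TLS handshake failure
assert classify_rpc_error(-1) == "EPERM"  # PSK file permission failure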
00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.408 00:48:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.408 [2024-07-13 00:48:32.940160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:21.408 [2024-07-13 00:48:32.940207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.668 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.668 [2024-07-13 00:48:33.009639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.668 [2024-07-13 00:48:33.049026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.668 [2024-07-13 00:48:33.049065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.668 [2024-07-13 00:48:33.049077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.668 [2024-07-13 00:48:33.049083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.669 [2024-07-13 00:48:33.049088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.669 [2024-07-13 00:48:33.049102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.3mTVh2ia4q 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.3mTVh2ia4q 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.3mTVh2ia4q 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3mTVh2ia4q 00:23:21.669 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.927 [2024-07-13 00:48:33.329550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.927 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.186 
00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.186 [2024-07-13 00:48:33.690454] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.186 [2024-07-13 00:48:33.690630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.186 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.445 malloc0 00:23:22.445 00:48:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.704 00:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3mTVh2ia4q 00:23:22.704 [2024-07-13 00:48:34.247999] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:22.704 [2024-07-13 00:48:34.248022] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:22.704 [2024-07-13 00:48:34.248043] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:22.704 request: 00:23:22.704 { 00:23:22.704 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.704 "host": "nqn.2016-06.io.spdk:host1", 00:23:22.704 "psk": "/tmp/tmp.3mTVh2ia4q", 00:23:22.704 "method": "nvmf_subsystem_add_host", 00:23:22.704 "req_id": 1 00:23:22.704 } 00:23:22.704 Got JSON-RPC error response 00:23:22.704 response: 00:23:22.704 { 00:23:22.704 "code": -32603, 00:23:22.704 "message": "Internal error" 00:23:22.704 } 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1445254 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1445254 ']' 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1445254 00:23:22.962 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1445254 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1445254' 00:23:22.963 killing process with pid 1445254 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1445254 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1445254 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.3mTVh2ia4q 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:22.963 
00:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1445514 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1445514 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1445514 ']' 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.963 00:48:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.221 [2024-07-13 00:48:34.562267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:23.221 [2024-07-13 00:48:34.562315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.221 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.221 [2024-07-13 00:48:34.634617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.221 [2024-07-13 00:48:34.674066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.221 [2024-07-13 00:48:34.674104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.221 [2024-07-13 00:48:34.674111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.221 [2024-07-13 00:48:34.674117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.221 [2024-07-13 00:48:34.674122] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
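Each save_config dump captured further below (target/tls.sh@196 for the target, @197 for bdevperf) is a replayable list of method/params entries grouped by subsystem, so the TLS-relevant settings can be extracted mechanically. A sketch, assuming one dump has been saved to tgtconf.json (the file name is illustrative):

import json

with open("tgtconf.json") as f:
    cfg = json.load(f)

# Pull the TLS-relevant settings out of the nvmf subsystem's replay list.
nvmf = next(s for s in cfg["subsystems"] if s["subsystem"] == "nvmf")
for entry in nvmf["config"]:
    if entry["method"] == "nvmf_subsystem_add_listener":
        print("secure_channel:", entry["params"]["secure_channel"])
    elif entry["method"] == "nvmf_subsystem_add_host":
        print("PSK path:", entry["params"]["psk"])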
00:23:23.221 [2024-07-13 00:48:34.674138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.3mTVh2ia4q 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3mTVh2ia4q 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.157 [2024-07-13 00:48:35.567336] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.157 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.416 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:24.416 [2024-07-13 00:48:35.928264] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.416 [2024-07-13 00:48:35.928432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.416 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.682 malloc0 00:23:24.682 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3mTVh2ia4q 00:23:24.942 [2024-07-13 00:48:36.453638] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1445775 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1445775 /var/tmp/bdevperf.sock 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1445775 ']' 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.942 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.201 [2024-07-13 00:48:36.511872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:25.201 [2024-07-13 00:48:36.511919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445775 ] 00:23:25.201 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.201 [2024-07-13 00:48:36.577113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.201 [2024-07-13 00:48:36.616682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.201 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.201 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:25.201 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3mTVh2ia4q 00:23:25.517 [2024-07-13 00:48:36.867947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.518 [2024-07-13 00:48:36.868037] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:25.518 TLSTESTn1 00:23:25.518 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:25.807 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:25.807 "subsystems": [ 00:23:25.807 { 00:23:25.807 "subsystem": "keyring", 00:23:25.807 "config": [] 00:23:25.807 }, 00:23:25.807 { 00:23:25.807 "subsystem": "iobuf", 00:23:25.807 "config": [ 00:23:25.807 { 00:23:25.807 "method": "iobuf_set_options", 00:23:25.807 "params": { 00:23:25.807 "small_pool_count": 8192, 00:23:25.807 "large_pool_count": 1024, 00:23:25.807 "small_bufsize": 8192, 00:23:25.807 "large_bufsize": 135168 00:23:25.807 } 00:23:25.807 } 00:23:25.807 ] 00:23:25.807 }, 00:23:25.807 { 00:23:25.807 "subsystem": "sock", 00:23:25.807 "config": [ 00:23:25.807 { 00:23:25.807 "method": "sock_set_default_impl", 00:23:25.807 "params": { 00:23:25.807 "impl_name": "posix" 00:23:25.807 } 00:23:25.807 }, 00:23:25.807 { 00:23:25.807 "method": "sock_impl_set_options", 00:23:25.807 "params": { 00:23:25.807 "impl_name": "ssl", 00:23:25.807 "recv_buf_size": 4096, 00:23:25.807 "send_buf_size": 4096, 00:23:25.807 "enable_recv_pipe": true, 00:23:25.807 "enable_quickack": false, 00:23:25.807 "enable_placement_id": 0, 00:23:25.807 "enable_zerocopy_send_server": true, 00:23:25.807 "enable_zerocopy_send_client": false, 00:23:25.807 "zerocopy_threshold": 0, 00:23:25.807 "tls_version": 0, 00:23:25.807 "enable_ktls": false 00:23:25.807 } 00:23:25.807 }, 00:23:25.807 { 00:23:25.807 "method": "sock_impl_set_options", 00:23:25.807 "params": { 00:23:25.807 "impl_name": "posix", 00:23:25.807 "recv_buf_size": 2097152, 00:23:25.807 
"send_buf_size": 2097152, 00:23:25.807 "enable_recv_pipe": true, 00:23:25.807 "enable_quickack": false, 00:23:25.807 "enable_placement_id": 0, 00:23:25.808 "enable_zerocopy_send_server": true, 00:23:25.808 "enable_zerocopy_send_client": false, 00:23:25.808 "zerocopy_threshold": 0, 00:23:25.808 "tls_version": 0, 00:23:25.808 "enable_ktls": false 00:23:25.808 } 00:23:25.808 } 00:23:25.808 ] 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "subsystem": "vmd", 00:23:25.808 "config": [] 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "subsystem": "accel", 00:23:25.808 "config": [ 00:23:25.808 { 00:23:25.808 "method": "accel_set_options", 00:23:25.808 "params": { 00:23:25.808 "small_cache_size": 128, 00:23:25.808 "large_cache_size": 16, 00:23:25.808 "task_count": 2048, 00:23:25.808 "sequence_count": 2048, 00:23:25.808 "buf_count": 2048 00:23:25.808 } 00:23:25.808 } 00:23:25.808 ] 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "subsystem": "bdev", 00:23:25.808 "config": [ 00:23:25.808 { 00:23:25.808 "method": "bdev_set_options", 00:23:25.808 "params": { 00:23:25.808 "bdev_io_pool_size": 65535, 00:23:25.808 "bdev_io_cache_size": 256, 00:23:25.808 "bdev_auto_examine": true, 00:23:25.808 "iobuf_small_cache_size": 128, 00:23:25.808 "iobuf_large_cache_size": 16 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "bdev_raid_set_options", 00:23:25.808 "params": { 00:23:25.808 "process_window_size_kb": 1024 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "bdev_iscsi_set_options", 00:23:25.808 "params": { 00:23:25.808 "timeout_sec": 30 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "bdev_nvme_set_options", 00:23:25.808 "params": { 00:23:25.808 "action_on_timeout": "none", 00:23:25.808 "timeout_us": 0, 00:23:25.808 "timeout_admin_us": 0, 00:23:25.808 "keep_alive_timeout_ms": 10000, 00:23:25.808 "arbitration_burst": 0, 00:23:25.808 "low_priority_weight": 0, 00:23:25.808 "medium_priority_weight": 0, 00:23:25.808 "high_priority_weight": 0, 00:23:25.808 "nvme_adminq_poll_period_us": 10000, 00:23:25.808 "nvme_ioq_poll_period_us": 0, 00:23:25.808 "io_queue_requests": 0, 00:23:25.808 "delay_cmd_submit": true, 00:23:25.808 "transport_retry_count": 4, 00:23:25.808 "bdev_retry_count": 3, 00:23:25.808 "transport_ack_timeout": 0, 00:23:25.808 "ctrlr_loss_timeout_sec": 0, 00:23:25.808 "reconnect_delay_sec": 0, 00:23:25.808 "fast_io_fail_timeout_sec": 0, 00:23:25.808 "disable_auto_failback": false, 00:23:25.808 "generate_uuids": false, 00:23:25.808 "transport_tos": 0, 00:23:25.808 "nvme_error_stat": false, 00:23:25.808 "rdma_srq_size": 0, 00:23:25.808 "io_path_stat": false, 00:23:25.808 "allow_accel_sequence": false, 00:23:25.808 "rdma_max_cq_size": 0, 00:23:25.808 "rdma_cm_event_timeout_ms": 0, 00:23:25.808 "dhchap_digests": [ 00:23:25.808 "sha256", 00:23:25.808 "sha384", 00:23:25.808 "sha512" 00:23:25.808 ], 00:23:25.808 "dhchap_dhgroups": [ 00:23:25.808 "null", 00:23:25.808 "ffdhe2048", 00:23:25.808 "ffdhe3072", 00:23:25.808 "ffdhe4096", 00:23:25.808 "ffdhe6144", 00:23:25.808 "ffdhe8192" 00:23:25.808 ] 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "bdev_nvme_set_hotplug", 00:23:25.808 "params": { 00:23:25.808 "period_us": 100000, 00:23:25.808 "enable": false 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "bdev_malloc_create", 00:23:25.808 "params": { 00:23:25.808 "name": "malloc0", 00:23:25.808 "num_blocks": 8192, 00:23:25.808 "block_size": 4096, 00:23:25.808 "physical_block_size": 4096, 00:23:25.808 "uuid": 
"58a8ace4-3d46-4036-8411-7161931f55d9", 00:23:25.808 "optimal_io_boundary": 0 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "bdev_wait_for_examine" 00:23:25.808 } 00:23:25.808 ] 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "subsystem": "nbd", 00:23:25.808 "config": [] 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "subsystem": "scheduler", 00:23:25.808 "config": [ 00:23:25.808 { 00:23:25.808 "method": "framework_set_scheduler", 00:23:25.808 "params": { 00:23:25.808 "name": "static" 00:23:25.808 } 00:23:25.808 } 00:23:25.808 ] 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "subsystem": "nvmf", 00:23:25.808 "config": [ 00:23:25.808 { 00:23:25.808 "method": "nvmf_set_config", 00:23:25.808 "params": { 00:23:25.808 "discovery_filter": "match_any", 00:23:25.808 "admin_cmd_passthru": { 00:23:25.808 "identify_ctrlr": false 00:23:25.808 } 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "nvmf_set_max_subsystems", 00:23:25.808 "params": { 00:23:25.808 "max_subsystems": 1024 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "nvmf_set_crdt", 00:23:25.808 "params": { 00:23:25.808 "crdt1": 0, 00:23:25.808 "crdt2": 0, 00:23:25.808 "crdt3": 0 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "nvmf_create_transport", 00:23:25.808 "params": { 00:23:25.808 "trtype": "TCP", 00:23:25.808 "max_queue_depth": 128, 00:23:25.808 "max_io_qpairs_per_ctrlr": 127, 00:23:25.808 "in_capsule_data_size": 4096, 00:23:25.808 "max_io_size": 131072, 00:23:25.808 "io_unit_size": 131072, 00:23:25.808 "max_aq_depth": 128, 00:23:25.808 "num_shared_buffers": 511, 00:23:25.808 "buf_cache_size": 4294967295, 00:23:25.808 "dif_insert_or_strip": false, 00:23:25.808 "zcopy": false, 00:23:25.808 "c2h_success": false, 00:23:25.808 "sock_priority": 0, 00:23:25.808 "abort_timeout_sec": 1, 00:23:25.808 "ack_timeout": 0, 00:23:25.808 "data_wr_pool_size": 0 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "nvmf_create_subsystem", 00:23:25.808 "params": { 00:23:25.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.808 "allow_any_host": false, 00:23:25.808 "serial_number": "SPDK00000000000001", 00:23:25.808 "model_number": "SPDK bdev Controller", 00:23:25.808 "max_namespaces": 10, 00:23:25.808 "min_cntlid": 1, 00:23:25.808 "max_cntlid": 65519, 00:23:25.808 "ana_reporting": false 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "nvmf_subsystem_add_host", 00:23:25.808 "params": { 00:23:25.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.808 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.808 "psk": "/tmp/tmp.3mTVh2ia4q" 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "nvmf_subsystem_add_ns", 00:23:25.808 "params": { 00:23:25.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.808 "namespace": { 00:23:25.808 "nsid": 1, 00:23:25.808 "bdev_name": "malloc0", 00:23:25.808 "nguid": "58A8ACE43D46403684117161931F55D9", 00:23:25.808 "uuid": "58a8ace4-3d46-4036-8411-7161931f55d9", 00:23:25.808 "no_auto_visible": false 00:23:25.808 } 00:23:25.808 } 00:23:25.808 }, 00:23:25.808 { 00:23:25.808 "method": "nvmf_subsystem_add_listener", 00:23:25.808 "params": { 00:23:25.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.808 "listen_address": { 00:23:25.808 "trtype": "TCP", 00:23:25.808 "adrfam": "IPv4", 00:23:25.808 "traddr": "10.0.0.2", 00:23:25.808 "trsvcid": "4420" 00:23:25.808 }, 00:23:25.808 "secure_channel": true 00:23:25.808 } 00:23:25.808 } 00:23:25.808 ] 00:23:25.808 } 00:23:25.808 ] 00:23:25.808 }' 00:23:25.808 00:48:37 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:26.068 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:26.068 "subsystems": [ 00:23:26.068 { 00:23:26.068 "subsystem": "keyring", 00:23:26.068 "config": [] 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "subsystem": "iobuf", 00:23:26.068 "config": [ 00:23:26.068 { 00:23:26.068 "method": "iobuf_set_options", 00:23:26.068 "params": { 00:23:26.068 "small_pool_count": 8192, 00:23:26.068 "large_pool_count": 1024, 00:23:26.068 "small_bufsize": 8192, 00:23:26.068 "large_bufsize": 135168 00:23:26.068 } 00:23:26.068 } 00:23:26.068 ] 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "subsystem": "sock", 00:23:26.068 "config": [ 00:23:26.068 { 00:23:26.068 "method": "sock_set_default_impl", 00:23:26.068 "params": { 00:23:26.068 "impl_name": "posix" 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "sock_impl_set_options", 00:23:26.068 "params": { 00:23:26.068 "impl_name": "ssl", 00:23:26.068 "recv_buf_size": 4096, 00:23:26.068 "send_buf_size": 4096, 00:23:26.068 "enable_recv_pipe": true, 00:23:26.068 "enable_quickack": false, 00:23:26.068 "enable_placement_id": 0, 00:23:26.068 "enable_zerocopy_send_server": true, 00:23:26.068 "enable_zerocopy_send_client": false, 00:23:26.068 "zerocopy_threshold": 0, 00:23:26.068 "tls_version": 0, 00:23:26.068 "enable_ktls": false 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "sock_impl_set_options", 00:23:26.068 "params": { 00:23:26.068 "impl_name": "posix", 00:23:26.068 "recv_buf_size": 2097152, 00:23:26.068 "send_buf_size": 2097152, 00:23:26.068 "enable_recv_pipe": true, 00:23:26.068 "enable_quickack": false, 00:23:26.068 "enable_placement_id": 0, 00:23:26.068 "enable_zerocopy_send_server": true, 00:23:26.068 "enable_zerocopy_send_client": false, 00:23:26.068 "zerocopy_threshold": 0, 00:23:26.068 "tls_version": 0, 00:23:26.068 "enable_ktls": false 00:23:26.068 } 00:23:26.068 } 00:23:26.068 ] 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "subsystem": "vmd", 00:23:26.068 "config": [] 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "subsystem": "accel", 00:23:26.068 "config": [ 00:23:26.068 { 00:23:26.068 "method": "accel_set_options", 00:23:26.068 "params": { 00:23:26.068 "small_cache_size": 128, 00:23:26.068 "large_cache_size": 16, 00:23:26.068 "task_count": 2048, 00:23:26.068 "sequence_count": 2048, 00:23:26.068 "buf_count": 2048 00:23:26.068 } 00:23:26.068 } 00:23:26.068 ] 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "subsystem": "bdev", 00:23:26.068 "config": [ 00:23:26.068 { 00:23:26.068 "method": "bdev_set_options", 00:23:26.068 "params": { 00:23:26.068 "bdev_io_pool_size": 65535, 00:23:26.068 "bdev_io_cache_size": 256, 00:23:26.068 "bdev_auto_examine": true, 00:23:26.068 "iobuf_small_cache_size": 128, 00:23:26.068 "iobuf_large_cache_size": 16 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "bdev_raid_set_options", 00:23:26.068 "params": { 00:23:26.068 "process_window_size_kb": 1024 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "bdev_iscsi_set_options", 00:23:26.068 "params": { 00:23:26.068 "timeout_sec": 30 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "bdev_nvme_set_options", 00:23:26.068 "params": { 00:23:26.068 "action_on_timeout": "none", 00:23:26.068 "timeout_us": 0, 00:23:26.068 "timeout_admin_us": 0, 00:23:26.068 "keep_alive_timeout_ms": 10000, 00:23:26.068 "arbitration_burst": 0, 
00:23:26.068 "low_priority_weight": 0, 00:23:26.068 "medium_priority_weight": 0, 00:23:26.068 "high_priority_weight": 0, 00:23:26.068 "nvme_adminq_poll_period_us": 10000, 00:23:26.068 "nvme_ioq_poll_period_us": 0, 00:23:26.068 "io_queue_requests": 512, 00:23:26.068 "delay_cmd_submit": true, 00:23:26.068 "transport_retry_count": 4, 00:23:26.068 "bdev_retry_count": 3, 00:23:26.068 "transport_ack_timeout": 0, 00:23:26.068 "ctrlr_loss_timeout_sec": 0, 00:23:26.068 "reconnect_delay_sec": 0, 00:23:26.068 "fast_io_fail_timeout_sec": 0, 00:23:26.068 "disable_auto_failback": false, 00:23:26.068 "generate_uuids": false, 00:23:26.068 "transport_tos": 0, 00:23:26.068 "nvme_error_stat": false, 00:23:26.068 "rdma_srq_size": 0, 00:23:26.068 "io_path_stat": false, 00:23:26.068 "allow_accel_sequence": false, 00:23:26.068 "rdma_max_cq_size": 0, 00:23:26.068 "rdma_cm_event_timeout_ms": 0, 00:23:26.068 "dhchap_digests": [ 00:23:26.068 "sha256", 00:23:26.068 "sha384", 00:23:26.068 "sha512" 00:23:26.068 ], 00:23:26.068 "dhchap_dhgroups": [ 00:23:26.068 "null", 00:23:26.068 "ffdhe2048", 00:23:26.068 "ffdhe3072", 00:23:26.068 "ffdhe4096", 00:23:26.068 "ffdhe6144", 00:23:26.068 "ffdhe8192" 00:23:26.068 ] 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "bdev_nvme_attach_controller", 00:23:26.068 "params": { 00:23:26.068 "name": "TLSTEST", 00:23:26.068 "trtype": "TCP", 00:23:26.068 "adrfam": "IPv4", 00:23:26.068 "traddr": "10.0.0.2", 00:23:26.068 "trsvcid": "4420", 00:23:26.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.068 "prchk_reftag": false, 00:23:26.068 "prchk_guard": false, 00:23:26.068 "ctrlr_loss_timeout_sec": 0, 00:23:26.068 "reconnect_delay_sec": 0, 00:23:26.068 "fast_io_fail_timeout_sec": 0, 00:23:26.068 "psk": "/tmp/tmp.3mTVh2ia4q", 00:23:26.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.068 "hdgst": false, 00:23:26.068 "ddgst": false 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "bdev_nvme_set_hotplug", 00:23:26.068 "params": { 00:23:26.068 "period_us": 100000, 00:23:26.068 "enable": false 00:23:26.068 } 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "method": "bdev_wait_for_examine" 00:23:26.068 } 00:23:26.068 ] 00:23:26.068 }, 00:23:26.068 { 00:23:26.068 "subsystem": "nbd", 00:23:26.068 "config": [] 00:23:26.068 } 00:23:26.069 ] 00:23:26.069 }' 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1445775 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1445775 ']' 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1445775 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1445775 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1445775' 00:23:26.069 killing process with pid 1445775 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1445775 00:23:26.069 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.069 00:23:26.069 Latency(us) 00:23:26.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:26.069 =================================================================================================================== 00:23:26.069 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:26.069 [2024-07-13 00:48:37.515608] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:26.069 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1445775 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1445514 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1445514 ']' 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1445514 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1445514 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1445514' 00:23:26.328 killing process with pid 1445514 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1445514 00:23:26.328 [2024-07-13 00:48:37.732862] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:26.328 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1445514 00:23:26.587 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:26.587 00:48:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.587 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.587 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:26.587 "subsystems": [ 00:23:26.587 { 00:23:26.587 "subsystem": "keyring", 00:23:26.587 "config": [] 00:23:26.587 }, 00:23:26.587 { 00:23:26.587 "subsystem": "iobuf", 00:23:26.587 "config": [ 00:23:26.587 { 00:23:26.587 "method": "iobuf_set_options", 00:23:26.587 "params": { 00:23:26.587 "small_pool_count": 8192, 00:23:26.587 "large_pool_count": 1024, 00:23:26.587 "small_bufsize": 8192, 00:23:26.587 "large_bufsize": 135168 00:23:26.587 } 00:23:26.587 } 00:23:26.587 ] 00:23:26.587 }, 00:23:26.587 { 00:23:26.587 "subsystem": "sock", 00:23:26.587 "config": [ 00:23:26.587 { 00:23:26.587 "method": "sock_set_default_impl", 00:23:26.587 "params": { 00:23:26.587 "impl_name": "posix" 00:23:26.587 } 00:23:26.587 }, 00:23:26.587 { 00:23:26.587 "method": "sock_impl_set_options", 00:23:26.587 "params": { 00:23:26.587 "impl_name": "ssl", 00:23:26.587 "recv_buf_size": 4096, 00:23:26.587 "send_buf_size": 4096, 00:23:26.587 "enable_recv_pipe": true, 00:23:26.587 "enable_quickack": false, 00:23:26.587 "enable_placement_id": 0, 00:23:26.587 "enable_zerocopy_send_server": true, 00:23:26.587 "enable_zerocopy_send_client": false, 00:23:26.587 "zerocopy_threshold": 0, 00:23:26.587 "tls_version": 0, 00:23:26.587 "enable_ktls": false 00:23:26.587 } 00:23:26.587 }, 00:23:26.587 { 00:23:26.587 "method": "sock_impl_set_options", 00:23:26.587 "params": { 00:23:26.587 "impl_name": "posix", 00:23:26.587 
"recv_buf_size": 2097152, 00:23:26.588 "send_buf_size": 2097152, 00:23:26.588 "enable_recv_pipe": true, 00:23:26.588 "enable_quickack": false, 00:23:26.588 "enable_placement_id": 0, 00:23:26.588 "enable_zerocopy_send_server": true, 00:23:26.588 "enable_zerocopy_send_client": false, 00:23:26.588 "zerocopy_threshold": 0, 00:23:26.588 "tls_version": 0, 00:23:26.588 "enable_ktls": false 00:23:26.588 } 00:23:26.588 } 00:23:26.588 ] 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "subsystem": "vmd", 00:23:26.588 "config": [] 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "subsystem": "accel", 00:23:26.588 "config": [ 00:23:26.588 { 00:23:26.588 "method": "accel_set_options", 00:23:26.588 "params": { 00:23:26.588 "small_cache_size": 128, 00:23:26.588 "large_cache_size": 16, 00:23:26.588 "task_count": 2048, 00:23:26.588 "sequence_count": 2048, 00:23:26.588 "buf_count": 2048 00:23:26.588 } 00:23:26.588 } 00:23:26.588 ] 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "subsystem": "bdev", 00:23:26.588 "config": [ 00:23:26.588 { 00:23:26.588 "method": "bdev_set_options", 00:23:26.588 "params": { 00:23:26.588 "bdev_io_pool_size": 65535, 00:23:26.588 "bdev_io_cache_size": 256, 00:23:26.588 "bdev_auto_examine": true, 00:23:26.588 "iobuf_small_cache_size": 128, 00:23:26.588 "iobuf_large_cache_size": 16 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "bdev_raid_set_options", 00:23:26.588 "params": { 00:23:26.588 "process_window_size_kb": 1024 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "bdev_iscsi_set_options", 00:23:26.588 "params": { 00:23:26.588 "timeout_sec": 30 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "bdev_nvme_set_options", 00:23:26.588 "params": { 00:23:26.588 "action_on_timeout": "none", 00:23:26.588 "timeout_us": 0, 00:23:26.588 "timeout_admin_us": 0, 00:23:26.588 "keep_alive_timeout_ms": 10000, 00:23:26.588 "arbitration_burst": 0, 00:23:26.588 "low_priority_weight": 0, 00:23:26.588 "medium_priority_weight": 0, 00:23:26.588 "high_priority_weight": 0, 00:23:26.588 "nvme_adminq_poll_period_us": 10000, 00:23:26.588 "nvme_ioq_poll_period_us": 0, 00:23:26.588 "io_queue_requests": 0, 00:23:26.588 "delay_cmd_submit": true, 00:23:26.588 "transport_retry_count": 4, 00:23:26.588 "bdev_retry_count": 3, 00:23:26.588 "transport_ack_timeout": 0, 00:23:26.588 "ctrlr_loss_timeout_sec": 0, 00:23:26.588 "reconnect_delay_sec": 0, 00:23:26.588 "fast_io_fail_timeout_sec": 0, 00:23:26.588 "disable_auto_failback": false, 00:23:26.588 "generate_uuids": false, 00:23:26.588 "transport_tos": 0, 00:23:26.588 "nvme_error_stat": false, 00:23:26.588 "rdma_srq_size": 0, 00:23:26.588 "io_path_stat": false, 00:23:26.588 "allow_accel_sequence": false, 00:23:26.588 "rdma_max_cq_size": 0, 00:23:26.588 "rdma_cm_event_timeout_ms": 0, 00:23:26.588 "dhchap_digests": [ 00:23:26.588 "sha256", 00:23:26.588 "sha384", 00:23:26.588 "sha512" 00:23:26.588 ], 00:23:26.588 "dhchap_dhgroups": [ 00:23:26.588 "null", 00:23:26.588 "ffdhe2048", 00:23:26.588 "ffdhe3072", 00:23:26.588 "ffdhe4096", 00:23:26.588 "ffdhe6144", 00:23:26.588 "ffdhe8192" 00:23:26.588 ] 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "bdev_nvme_set_hotplug", 00:23:26.588 "params": { 00:23:26.588 "period_us": 100000, 00:23:26.588 "enable": false 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "bdev_malloc_create", 00:23:26.588 "params": { 00:23:26.588 "name": "malloc0", 00:23:26.588 "num_blocks": 8192, 00:23:26.588 "block_size": 4096, 00:23:26.588 "physical_block_size": 4096, 
00:23:26.588 "uuid": "58a8ace4-3d46-4036-8411-7161931f55d9", 00:23:26.588 "optimal_io_boundary": 0 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "bdev_wait_for_examine" 00:23:26.588 } 00:23:26.588 ] 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "subsystem": "nbd", 00:23:26.588 "config": [] 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "subsystem": "scheduler", 00:23:26.588 "config": [ 00:23:26.588 { 00:23:26.588 "method": "framework_set_scheduler", 00:23:26.588 "params": { 00:23:26.588 "name": "static" 00:23:26.588 } 00:23:26.588 } 00:23:26.588 ] 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "subsystem": "nvmf", 00:23:26.588 "config": [ 00:23:26.588 { 00:23:26.588 "method": "nvmf_set_config", 00:23:26.588 "params": { 00:23:26.588 "discovery_filter": "match_any", 00:23:26.588 "admin_cmd_passthru": { 00:23:26.588 "identify_ctrlr": false 00:23:26.588 } 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "nvmf_set_max_subsystems", 00:23:26.588 "params": { 00:23:26.588 "max_subsystems": 1024 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "nvmf_set_crdt", 00:23:26.588 "params": { 00:23:26.588 "crdt1": 0, 00:23:26.588 "crdt2": 0, 00:23:26.588 "crdt3": 0 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "nvmf_create_transport", 00:23:26.588 "params": { 00:23:26.588 "trtype": "TCP", 00:23:26.588 "max_queue_depth": 128, 00:23:26.588 "max_io_qpairs_per_ctrlr": 127, 00:23:26.588 "in_capsule_data_size": 4096, 00:23:26.588 "max_io_size": 131072, 00:23:26.588 "io_unit_size": 131072, 00:23:26.588 "max_aq_depth": 128, 00:23:26.588 "num_shared_buffers": 511, 00:23:26.588 "buf_cache_size": 4294967295, 00:23:26.588 "dif_insert_or_strip": false, 00:23:26.588 "zcopy": false, 00:23:26.588 "c2h_success": false, 00:23:26.588 "sock_priority": 0, 00:23:26.588 "abort_timeout_sec": 1, 00:23:26.588 "ack_timeout": 0, 00:23:26.588 "data_wr_pool_size": 0 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "nvmf_create_subsystem", 00:23:26.588 "params": { 00:23:26.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.588 "allow_any_host": false, 00:23:26.588 "serial_number": "SPDK00000000000001", 00:23:26.588 "model_number": "SPDK bdev Controller", 00:23:26.588 "max_namespaces": 10, 00:23:26.588 "min_cntlid": 1, 00:23:26.588 "max_cntlid": 65519, 00:23:26.588 "ana_reporting": false 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "nvmf_subsystem_add_host", 00:23:26.588 "params": { 00:23:26.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.588 "host": "nqn.2016-06.io.spdk:host1", 00:23:26.588 "psk": "/tmp/tmp.3mTVh2ia4q" 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "nvmf_subsystem_add_ns", 00:23:26.588 "params": { 00:23:26.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.588 "namespace": { 00:23:26.588 "nsid": 1, 00:23:26.588 "bdev_name": "malloc0", 00:23:26.588 "nguid": "58A8ACE43D46403684117161931F55D9", 00:23:26.588 "uuid": "58a8ace4-3d46-4036-8411-7161931f55d9", 00:23:26.588 "no_auto_visible": false 00:23:26.588 } 00:23:26.588 } 00:23:26.588 }, 00:23:26.588 { 00:23:26.588 "method": "nvmf_subsystem_add_listener", 00:23:26.588 "params": { 00:23:26.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.588 "listen_address": { 00:23:26.588 "trtype": "TCP", 00:23:26.588 "adrfam": "IPv4", 00:23:26.588 "traddr": "10.0.0.2", 00:23:26.588 "trsvcid": "4420" 00:23:26.588 }, 00:23:26.588 "secure_channel": true 00:23:26.588 } 00:23:26.588 } 00:23:26.588 ] 00:23:26.588 } 00:23:26.588 ] 00:23:26.588 }' 
00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1446030 00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1446030 00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1446030 ']' 00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.588 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.589 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.589 00:48:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.589 [2024-07-13 00:48:37.968510] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:26.589 [2024-07-13 00:48:37.968555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.589 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.589 [2024-07-13 00:48:38.023937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.589 [2024-07-13 00:48:38.063474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.589 [2024-07-13 00:48:38.063513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.589 [2024-07-13 00:48:38.063520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.589 [2024-07-13 00:48:38.063526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.589 [2024-07-13 00:48:38.063531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
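Note: the target was started with -e 0xFFFF, which enables every tracepoint group, hence the app_setup_trace notices above. Per those notices, a trace snapshot can be taken while the test runs, or recovered afterwards from shared memory; both commands come straight from the log text (the spdk_trace binary location under build/bin is assumed from the build layout):

    build/bin/spdk_trace -s nvmf -i 0      # live snapshot of the nvmf app's trace events
    cp /dev/shm/nvmf_trace.0 /tmp/         # or keep the shm ring buffer for offline analysis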
00:23:26.589 [2024-07-13 00:48:38.063583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.848 [2024-07-13 00:48:38.260732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.848 [2024-07-13 00:48:38.276686] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:26.848 [2024-07-13 00:48:38.292743] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.848 [2024-07-13 00:48:38.301519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1446272 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1446272 /var/tmp/bdevperf.sock 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1446272 ']' 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
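Note: bdevperf is launched here as a second SPDK application with its own core mask (-m 0x4) and RPC socket (-r /var/tmp/bdevperf.sock); -z makes it sit idle until a perform_tests RPC arrives, and -c /dev/fd/63 feeds it the config echoed below. A sketch of the two-step launch, with the flags taken from the xtrace above (the backgrounding is an assumption, the test framework handles it internally):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &          # start idle
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock \
        perform_tests                                                        # kick off the 10 s verify run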
00:23:27.417 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:27.417 "subsystems": [ 00:23:27.417 { 00:23:27.417 "subsystem": "keyring", 00:23:27.417 "config": [] 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "subsystem": "iobuf", 00:23:27.417 "config": [ 00:23:27.417 { 00:23:27.417 "method": "iobuf_set_options", 00:23:27.417 "params": { 00:23:27.417 "small_pool_count": 8192, 00:23:27.417 "large_pool_count": 1024, 00:23:27.417 "small_bufsize": 8192, 00:23:27.417 "large_bufsize": 135168 00:23:27.417 } 00:23:27.417 } 00:23:27.417 ] 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "subsystem": "sock", 00:23:27.417 "config": [ 00:23:27.417 { 00:23:27.417 "method": "sock_set_default_impl", 00:23:27.417 "params": { 00:23:27.417 "impl_name": "posix" 00:23:27.417 } 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "method": "sock_impl_set_options", 00:23:27.417 "params": { 00:23:27.417 "impl_name": "ssl", 00:23:27.417 "recv_buf_size": 4096, 00:23:27.417 "send_buf_size": 4096, 00:23:27.417 "enable_recv_pipe": true, 00:23:27.417 "enable_quickack": false, 00:23:27.417 "enable_placement_id": 0, 00:23:27.417 "enable_zerocopy_send_server": true, 00:23:27.417 "enable_zerocopy_send_client": false, 00:23:27.417 "zerocopy_threshold": 0, 00:23:27.417 "tls_version": 0, 00:23:27.417 "enable_ktls": false 00:23:27.417 } 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "method": "sock_impl_set_options", 00:23:27.417 "params": { 00:23:27.417 "impl_name": "posix", 00:23:27.417 "recv_buf_size": 2097152, 00:23:27.417 "send_buf_size": 2097152, 00:23:27.417 "enable_recv_pipe": true, 00:23:27.417 "enable_quickack": false, 00:23:27.417 "enable_placement_id": 0, 00:23:27.417 "enable_zerocopy_send_server": true, 00:23:27.417 "enable_zerocopy_send_client": false, 00:23:27.417 "zerocopy_threshold": 0, 00:23:27.417 "tls_version": 0, 00:23:27.417 "enable_ktls": false 00:23:27.417 } 00:23:27.417 } 00:23:27.417 ] 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "subsystem": "vmd", 00:23:27.417 "config": [] 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "subsystem": "accel", 00:23:27.417 "config": [ 00:23:27.417 { 00:23:27.417 "method": "accel_set_options", 00:23:27.417 "params": { 00:23:27.417 "small_cache_size": 128, 00:23:27.417 "large_cache_size": 16, 00:23:27.417 "task_count": 2048, 00:23:27.417 "sequence_count": 2048, 00:23:27.417 "buf_count": 2048 00:23:27.417 } 00:23:27.417 } 00:23:27.417 ] 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "subsystem": "bdev", 00:23:27.417 "config": [ 00:23:27.417 { 00:23:27.417 "method": "bdev_set_options", 00:23:27.417 "params": { 00:23:27.417 "bdev_io_pool_size": 65535, 00:23:27.417 "bdev_io_cache_size": 256, 00:23:27.417 "bdev_auto_examine": true, 00:23:27.417 "iobuf_small_cache_size": 128, 00:23:27.417 "iobuf_large_cache_size": 16 00:23:27.417 } 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "method": "bdev_raid_set_options", 00:23:27.417 "params": { 00:23:27.417 "process_window_size_kb": 1024 00:23:27.417 } 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "method": "bdev_iscsi_set_options", 00:23:27.417 "params": { 00:23:27.417 "timeout_sec": 30 00:23:27.417 } 00:23:27.417 }, 00:23:27.417 { 00:23:27.417 "method": "bdev_nvme_set_options", 00:23:27.417 "params": { 00:23:27.417 "action_on_timeout": "none", 00:23:27.417 "timeout_us": 0, 00:23:27.417 "timeout_admin_us": 0, 00:23:27.417 "keep_alive_timeout_ms": 10000, 00:23:27.417 "arbitration_burst": 0, 00:23:27.417 "low_priority_weight": 0, 00:23:27.417 "medium_priority_weight": 0, 00:23:27.417 "high_priority_weight": 0, 00:23:27.417 
"nvme_adminq_poll_period_us": 10000, 00:23:27.417 "nvme_ioq_poll_period_us": 0, 00:23:27.417 "io_queue_requests": 512, 00:23:27.417 "delay_cmd_submit": true, 00:23:27.417 "transport_retry_count": 4, 00:23:27.417 "bdev_retry_count": 3, 00:23:27.417 "transport_ack_timeout": 0, 00:23:27.418 "ctrlr_loss_timeout_sec": 0, 00:23:27.418 "reconnect_delay_sec": 0, 00:23:27.418 "fast_io_fail_timeout_sec": 0, 00:23:27.418 "disable_auto_failback": false, 00:23:27.418 "generate_uuids": false, 00:23:27.418 "transport_tos": 0, 00:23:27.418 "nvme_error_stat": false, 00:23:27.418 "rdma_srq_size": 0, 00:23:27.418 "io_path_stat": false, 00:23:27.418 "allow_accel_sequence": false, 00:23:27.418 "rdma_max_cq_size": 0, 00:23:27.418 "rdma_cm_event_timeout_ms": 0, 00:23:27.418 "dhchap_digests": [ 00:23:27.418 "sha256", 00:23:27.418 "sha384", 00:23:27.418 "sha512" 00:23:27.418 ], 00:23:27.418 "dhchap_dhgroups": [ 00:23:27.418 "null", 00:23:27.418 "ffdhe2048", 00:23:27.418 "ffdhe3072", 00:23:27.418 "ffdhe4096", 00:23:27.418 "ffdhe6144", 00:23:27.418 "ffdhe8192" 00:23:27.418 ] 00:23:27.418 } 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "method": "bdev_nvme_attach_controller", 00:23:27.418 "params": { 00:23:27.418 "name": "TLSTEST", 00:23:27.418 "trtype": "TCP", 00:23:27.418 "adrfam": "IPv4", 00:23:27.418 "traddr": "10.0.0.2", 00:23:27.418 "trsvcid": "4420", 00:23:27.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.418 "prchk_reftag": false, 00:23:27.418 "prchk_guard": false, 00:23:27.418 "ctrlr_loss_timeout_sec": 0, 00:23:27.418 "reconnect_delay_sec": 0, 00:23:27.418 "fast_io_fail_timeout_sec": 0, 00:23:27.418 "psk": "/tmp/tmp.3mTVh2ia4q", 00:23:27.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.418 "hdgst": false, 00:23:27.418 "ddgst": false 00:23:27.418 } 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "method": "bdev_nvme_set_hotplug", 00:23:27.418 "params": { 00:23:27.418 "period_us": 100000, 00:23:27.418 "enable": false 00:23:27.418 } 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "method": "bdev_wait_for_examine" 00:23:27.418 } 00:23:27.418 ] 00:23:27.418 }, 00:23:27.418 { 00:23:27.418 "subsystem": "nbd", 00:23:27.418 "config": [] 00:23:27.418 } 00:23:27.418 ] 00:23:27.418 }' 00:23:27.418 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.418 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.418 [2024-07-13 00:48:38.886860] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:27.418 [2024-07-13 00:48:38.886918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446272 ] 00:23:27.418 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.418 [2024-07-13 00:48:38.953183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.677 [2024-07-13 00:48:38.992883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.677 [2024-07-13 00:48:39.130574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.677 [2024-07-13 00:48:39.130649] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.246 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.246 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.246 00:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:28.246 Running I/O for 10 seconds... 00:23:40.442 00:23:40.442 Latency(us) 00:23:40.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.442 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.442 Verification LBA range: start 0x0 length 0x2000 00:23:40.442 TLSTESTn1 : 10.02 5464.61 21.35 0.00 0.00 23385.12 4900.95 22681.15 00:23:40.442 =================================================================================================================== 00:23:40.442 Total : 5464.61 21.35 0.00 0.00 23385.12 4900.95 22681.15 00:23:40.442 0 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1446272 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1446272 ']' 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1446272 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1446272 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1446272' 00:23:40.442 killing process with pid 1446272 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1446272 00:23:40.442 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.442 00:23:40.442 Latency(us) 00:23:40.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.442 =================================================================================================================== 00:23:40.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.442 [2024-07-13 00:48:49.897895] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.442 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1446272 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1446030 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1446030 ']' 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1446030 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1446030 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1446030' 00:23:40.442 killing process with pid 1446030 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1446030 00:23:40.442 [2024-07-13 00:48:50.115881] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1446030 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1448113 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1448113 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1448113 ']' 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.442 [2024-07-13 00:48:50.350132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:40.442 [2024-07-13 00:48:50.350178] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.442 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.442 [2024-07-13 00:48:50.416779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.442 [2024-07-13 00:48:50.456424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.442 [2024-07-13 00:48:50.456477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.442 [2024-07-13 00:48:50.456485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.442 [2024-07-13 00:48:50.456491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.442 [2024-07-13 00:48:50.456499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.442 [2024-07-13 00:48:50.456533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.3mTVh2ia4q 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3mTVh2ia4q 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:40.442 [2024-07-13 00:48:50.728311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:40.442 00:48:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:40.442 [2024-07-13 00:48:51.109301] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.442 [2024-07-13 00:48:51.109474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:40.442 malloc0 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.3mTVh2ia4q 00:23:40.442 [2024-07-13 00:48:51.658763] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1448362 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1448362 /var/tmp/bdevperf.sock 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1448362 ']' 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.442 00:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.443 00:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.443 00:48:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.443 [2024-07-13 00:48:51.731412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:40.443 [2024-07-13 00:48:51.731455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448362 ] 00:23:40.443 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.443 [2024-07-13 00:48:51.799989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.443 [2024-07-13 00:48:51.839901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.009 00:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.009 00:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:41.009 00:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3mTVh2ia4q 00:23:41.267 00:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:41.524 [2024-07-13 00:48:52.882491] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.524 nvme0n1 00:23:41.524 00:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.524 Running I/O for 1 seconds... 
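Note: on the initiator side this test case switches from the deprecated in-opts PSK path (the form that triggered the 'spdk_nvme_ctrlr_opts.psk ... scheduled for removal in v24.09' warnings in the earlier runs) to the keyring flow: the key file is first registered under a name, and the controller attach then references the key by that name, so only the target-side 'PSK path' deprecation remains. Both RPCs exactly as shown in the xtrace above:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3mTVh2ia4q
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1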
00:23:42.899 00:23:42.899 Latency(us) 00:23:42.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.899 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:42.899 Verification LBA range: start 0x0 length 0x2000 00:23:42.899 nvme0n1 : 1.01 5362.66 20.95 0.00 0.00 23701.64 5128.90 24162.84 00:23:42.899 =================================================================================================================== 00:23:42.899 Total : 5362.66 20.95 0.00 0.00 23701.64 5128.90 24162.84 00:23:42.899 0 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1448362 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1448362 ']' 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1448362 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448362 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448362' 00:23:42.899 killing process with pid 1448362 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1448362 00:23:42.899 Received shutdown signal, test time was about 1.000000 seconds 00:23:42.899 00:23:42.899 Latency(us) 00:23:42.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.899 =================================================================================================================== 00:23:42.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1448362 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1448113 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1448113 ']' 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1448113 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448113 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448113' 00:23:42.899 killing process with pid 1448113 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1448113 00:23:42.899 [2024-07-13 00:48:54.366464] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:42.899 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1448113 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.157 
00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1448840 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1448840 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1448840 ']' 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.157 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.157 [2024-07-13 00:48:54.599324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:43.157 [2024-07-13 00:48:54.599370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.157 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.157 [2024-07-13 00:48:54.665446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.416 [2024-07-13 00:48:54.719846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.416 [2024-07-13 00:48:54.719890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.416 [2024-07-13 00:48:54.719903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.416 [2024-07-13 00:48:54.719913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.416 [2024-07-13 00:48:54.719922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:43.416 [2024-07-13 00:48:54.719948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.416 [2024-07-13 00:48:54.848754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.416 malloc0 00:23:43.416 [2024-07-13 00:48:54.876807] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.416 [2024-07-13 00:48:54.876986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1448860 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1448860 /var/tmp/bdevperf.sock 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1448860 ']' 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.416 00:48:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.416 [2024-07-13 00:48:54.949325] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:43.416 [2024-07-13 00:48:54.949364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448860 ] 00:23:43.416 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.674 [2024-07-13 00:48:55.014520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.674 [2024-07-13 00:48:55.054067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.674 00:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.674 00:48:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:43.674 00:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3mTVh2ia4q 00:23:43.932 00:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:43.932 [2024-07-13 00:48:55.479367] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.191 nvme0n1 00:23:44.191 00:48:55 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.191 Running I/O for 1 seconds... 00:23:45.123 00:23:45.123 Latency(us) 00:23:45.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.123 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:45.123 Verification LBA range: start 0x0 length 0x2000 00:23:45.123 nvme0n1 : 1.02 5289.24 20.66 0.00 0.00 23990.22 4872.46 23706.94 00:23:45.123 =================================================================================================================== 00:23:45.123 Total : 5289.24 20.66 0.00 0.00 23990.22 4872.46 23706.94 00:23:45.123 0 00:23:45.382 00:48:56 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:45.382 00:48:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.382 00:48:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.382 00:48:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.382 00:48:56 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:45.382 "subsystems": [ 00:23:45.382 { 00:23:45.382 "subsystem": "keyring", 00:23:45.382 "config": [ 00:23:45.382 { 00:23:45.382 "method": "keyring_file_add_key", 00:23:45.382 "params": { 00:23:45.382 "name": "key0", 00:23:45.382 "path": "/tmp/tmp.3mTVh2ia4q" 00:23:45.382 } 00:23:45.382 } 00:23:45.382 ] 00:23:45.382 }, 00:23:45.382 { 00:23:45.382 "subsystem": "iobuf", 00:23:45.382 "config": [ 00:23:45.382 { 00:23:45.382 "method": "iobuf_set_options", 00:23:45.382 "params": { 00:23:45.382 "small_pool_count": 8192, 00:23:45.382 "large_pool_count": 1024, 00:23:45.382 "small_bufsize": 8192, 00:23:45.382 "large_bufsize": 135168 00:23:45.382 } 00:23:45.382 } 00:23:45.382 ] 00:23:45.382 }, 00:23:45.382 { 00:23:45.382 "subsystem": "sock", 00:23:45.382 "config": [ 00:23:45.382 { 00:23:45.382 "method": "sock_set_default_impl", 00:23:45.382 "params": { 00:23:45.382 "impl_name": "posix" 00:23:45.382 } 
00:23:45.382 }, 00:23:45.382 { 00:23:45.382 "method": "sock_impl_set_options", 00:23:45.382 "params": { 00:23:45.382 "impl_name": "ssl", 00:23:45.382 "recv_buf_size": 4096, 00:23:45.382 "send_buf_size": 4096, 00:23:45.382 "enable_recv_pipe": true, 00:23:45.382 "enable_quickack": false, 00:23:45.382 "enable_placement_id": 0, 00:23:45.382 "enable_zerocopy_send_server": true, 00:23:45.382 "enable_zerocopy_send_client": false, 00:23:45.382 "zerocopy_threshold": 0, 00:23:45.382 "tls_version": 0, 00:23:45.382 "enable_ktls": false 00:23:45.382 } 00:23:45.382 }, 00:23:45.382 { 00:23:45.382 "method": "sock_impl_set_options", 00:23:45.382 "params": { 00:23:45.382 "impl_name": "posix", 00:23:45.382 "recv_buf_size": 2097152, 00:23:45.382 "send_buf_size": 2097152, 00:23:45.382 "enable_recv_pipe": true, 00:23:45.382 "enable_quickack": false, 00:23:45.382 "enable_placement_id": 0, 00:23:45.382 "enable_zerocopy_send_server": true, 00:23:45.382 "enable_zerocopy_send_client": false, 00:23:45.382 "zerocopy_threshold": 0, 00:23:45.382 "tls_version": 0, 00:23:45.382 "enable_ktls": false 00:23:45.382 } 00:23:45.382 } 00:23:45.382 ] 00:23:45.382 }, 00:23:45.382 { 00:23:45.382 "subsystem": "vmd", 00:23:45.382 "config": [] 00:23:45.382 }, 00:23:45.382 { 00:23:45.382 "subsystem": "accel", 00:23:45.382 "config": [ 00:23:45.382 { 00:23:45.382 "method": "accel_set_options", 00:23:45.382 "params": { 00:23:45.382 "small_cache_size": 128, 00:23:45.383 "large_cache_size": 16, 00:23:45.383 "task_count": 2048, 00:23:45.383 "sequence_count": 2048, 00:23:45.383 "buf_count": 2048 00:23:45.383 } 00:23:45.383 } 00:23:45.383 ] 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "subsystem": "bdev", 00:23:45.383 "config": [ 00:23:45.383 { 00:23:45.383 "method": "bdev_set_options", 00:23:45.383 "params": { 00:23:45.383 "bdev_io_pool_size": 65535, 00:23:45.383 "bdev_io_cache_size": 256, 00:23:45.383 "bdev_auto_examine": true, 00:23:45.383 "iobuf_small_cache_size": 128, 00:23:45.383 "iobuf_large_cache_size": 16 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "bdev_raid_set_options", 00:23:45.383 "params": { 00:23:45.383 "process_window_size_kb": 1024 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "bdev_iscsi_set_options", 00:23:45.383 "params": { 00:23:45.383 "timeout_sec": 30 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "bdev_nvme_set_options", 00:23:45.383 "params": { 00:23:45.383 "action_on_timeout": "none", 00:23:45.383 "timeout_us": 0, 00:23:45.383 "timeout_admin_us": 0, 00:23:45.383 "keep_alive_timeout_ms": 10000, 00:23:45.383 "arbitration_burst": 0, 00:23:45.383 "low_priority_weight": 0, 00:23:45.383 "medium_priority_weight": 0, 00:23:45.383 "high_priority_weight": 0, 00:23:45.383 "nvme_adminq_poll_period_us": 10000, 00:23:45.383 "nvme_ioq_poll_period_us": 0, 00:23:45.383 "io_queue_requests": 0, 00:23:45.383 "delay_cmd_submit": true, 00:23:45.383 "transport_retry_count": 4, 00:23:45.383 "bdev_retry_count": 3, 00:23:45.383 "transport_ack_timeout": 0, 00:23:45.383 "ctrlr_loss_timeout_sec": 0, 00:23:45.383 "reconnect_delay_sec": 0, 00:23:45.383 "fast_io_fail_timeout_sec": 0, 00:23:45.383 "disable_auto_failback": false, 00:23:45.383 "generate_uuids": false, 00:23:45.383 "transport_tos": 0, 00:23:45.383 "nvme_error_stat": false, 00:23:45.383 "rdma_srq_size": 0, 00:23:45.383 "io_path_stat": false, 00:23:45.383 "allow_accel_sequence": false, 00:23:45.383 "rdma_max_cq_size": 0, 00:23:45.383 "rdma_cm_event_timeout_ms": 0, 00:23:45.383 "dhchap_digests": [ 00:23:45.383 "sha256", 
00:23:45.383 "sha384", 00:23:45.383 "sha512" 00:23:45.383 ], 00:23:45.383 "dhchap_dhgroups": [ 00:23:45.383 "null", 00:23:45.383 "ffdhe2048", 00:23:45.383 "ffdhe3072", 00:23:45.383 "ffdhe4096", 00:23:45.383 "ffdhe6144", 00:23:45.383 "ffdhe8192" 00:23:45.383 ] 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "bdev_nvme_set_hotplug", 00:23:45.383 "params": { 00:23:45.383 "period_us": 100000, 00:23:45.383 "enable": false 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "bdev_malloc_create", 00:23:45.383 "params": { 00:23:45.383 "name": "malloc0", 00:23:45.383 "num_blocks": 8192, 00:23:45.383 "block_size": 4096, 00:23:45.383 "physical_block_size": 4096, 00:23:45.383 "uuid": "80aac057-34cf-4123-835b-99fa143e530a", 00:23:45.383 "optimal_io_boundary": 0 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "bdev_wait_for_examine" 00:23:45.383 } 00:23:45.383 ] 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "subsystem": "nbd", 00:23:45.383 "config": [] 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "subsystem": "scheduler", 00:23:45.383 "config": [ 00:23:45.383 { 00:23:45.383 "method": "framework_set_scheduler", 00:23:45.383 "params": { 00:23:45.383 "name": "static" 00:23:45.383 } 00:23:45.383 } 00:23:45.383 ] 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "subsystem": "nvmf", 00:23:45.383 "config": [ 00:23:45.383 { 00:23:45.383 "method": "nvmf_set_config", 00:23:45.383 "params": { 00:23:45.383 "discovery_filter": "match_any", 00:23:45.383 "admin_cmd_passthru": { 00:23:45.383 "identify_ctrlr": false 00:23:45.383 } 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "nvmf_set_max_subsystems", 00:23:45.383 "params": { 00:23:45.383 "max_subsystems": 1024 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "nvmf_set_crdt", 00:23:45.383 "params": { 00:23:45.383 "crdt1": 0, 00:23:45.383 "crdt2": 0, 00:23:45.383 "crdt3": 0 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "nvmf_create_transport", 00:23:45.383 "params": { 00:23:45.383 "trtype": "TCP", 00:23:45.383 "max_queue_depth": 128, 00:23:45.383 "max_io_qpairs_per_ctrlr": 127, 00:23:45.383 "in_capsule_data_size": 4096, 00:23:45.383 "max_io_size": 131072, 00:23:45.383 "io_unit_size": 131072, 00:23:45.383 "max_aq_depth": 128, 00:23:45.383 "num_shared_buffers": 511, 00:23:45.383 "buf_cache_size": 4294967295, 00:23:45.383 "dif_insert_or_strip": false, 00:23:45.383 "zcopy": false, 00:23:45.383 "c2h_success": false, 00:23:45.383 "sock_priority": 0, 00:23:45.383 "abort_timeout_sec": 1, 00:23:45.383 "ack_timeout": 0, 00:23:45.383 "data_wr_pool_size": 0 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "nvmf_create_subsystem", 00:23:45.383 "params": { 00:23:45.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.383 "allow_any_host": false, 00:23:45.383 "serial_number": "00000000000000000000", 00:23:45.383 "model_number": "SPDK bdev Controller", 00:23:45.383 "max_namespaces": 32, 00:23:45.383 "min_cntlid": 1, 00:23:45.383 "max_cntlid": 65519, 00:23:45.383 "ana_reporting": false 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "nvmf_subsystem_add_host", 00:23:45.383 "params": { 00:23:45.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.383 "host": "nqn.2016-06.io.spdk:host1", 00:23:45.383 "psk": "key0" 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "nvmf_subsystem_add_ns", 00:23:45.383 "params": { 00:23:45.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.383 "namespace": { 00:23:45.383 "nsid": 1, 
00:23:45.383 "bdev_name": "malloc0", 00:23:45.383 "nguid": "80AAC05734CF4123835B99FA143E530A", 00:23:45.383 "uuid": "80aac057-34cf-4123-835b-99fa143e530a", 00:23:45.383 "no_auto_visible": false 00:23:45.383 } 00:23:45.383 } 00:23:45.383 }, 00:23:45.383 { 00:23:45.383 "method": "nvmf_subsystem_add_listener", 00:23:45.383 "params": { 00:23:45.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.383 "listen_address": { 00:23:45.383 "trtype": "TCP", 00:23:45.383 "adrfam": "IPv4", 00:23:45.383 "traddr": "10.0.0.2", 00:23:45.383 "trsvcid": "4420" 00:23:45.383 }, 00:23:45.383 "secure_channel": true 00:23:45.383 } 00:23:45.383 } 00:23:45.383 ] 00:23:45.383 } 00:23:45.383 ] 00:23:45.383 }' 00:23:45.383 00:48:56 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:45.643 00:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:45.643 "subsystems": [ 00:23:45.643 { 00:23:45.643 "subsystem": "keyring", 00:23:45.643 "config": [ 00:23:45.643 { 00:23:45.643 "method": "keyring_file_add_key", 00:23:45.643 "params": { 00:23:45.643 "name": "key0", 00:23:45.643 "path": "/tmp/tmp.3mTVh2ia4q" 00:23:45.643 } 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "subsystem": "iobuf", 00:23:45.643 "config": [ 00:23:45.643 { 00:23:45.643 "method": "iobuf_set_options", 00:23:45.643 "params": { 00:23:45.643 "small_pool_count": 8192, 00:23:45.643 "large_pool_count": 1024, 00:23:45.643 "small_bufsize": 8192, 00:23:45.643 "large_bufsize": 135168 00:23:45.643 } 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "subsystem": "sock", 00:23:45.643 "config": [ 00:23:45.643 { 00:23:45.643 "method": "sock_set_default_impl", 00:23:45.643 "params": { 00:23:45.643 "impl_name": "posix" 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "sock_impl_set_options", 00:23:45.643 "params": { 00:23:45.643 "impl_name": "ssl", 00:23:45.643 "recv_buf_size": 4096, 00:23:45.643 "send_buf_size": 4096, 00:23:45.643 "enable_recv_pipe": true, 00:23:45.643 "enable_quickack": false, 00:23:45.643 "enable_placement_id": 0, 00:23:45.643 "enable_zerocopy_send_server": true, 00:23:45.643 "enable_zerocopy_send_client": false, 00:23:45.643 "zerocopy_threshold": 0, 00:23:45.643 "tls_version": 0, 00:23:45.643 "enable_ktls": false 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "sock_impl_set_options", 00:23:45.643 "params": { 00:23:45.643 "impl_name": "posix", 00:23:45.643 "recv_buf_size": 2097152, 00:23:45.643 "send_buf_size": 2097152, 00:23:45.643 "enable_recv_pipe": true, 00:23:45.643 "enable_quickack": false, 00:23:45.643 "enable_placement_id": 0, 00:23:45.643 "enable_zerocopy_send_server": true, 00:23:45.643 "enable_zerocopy_send_client": false, 00:23:45.643 "zerocopy_threshold": 0, 00:23:45.643 "tls_version": 0, 00:23:45.643 "enable_ktls": false 00:23:45.643 } 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "subsystem": "vmd", 00:23:45.643 "config": [] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "subsystem": "accel", 00:23:45.643 "config": [ 00:23:45.643 { 00:23:45.643 "method": "accel_set_options", 00:23:45.643 "params": { 00:23:45.643 "small_cache_size": 128, 00:23:45.643 "large_cache_size": 16, 00:23:45.643 "task_count": 2048, 00:23:45.643 "sequence_count": 2048, 00:23:45.643 "buf_count": 2048 00:23:45.643 } 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "subsystem": "bdev", 00:23:45.643 "config": [ 
00:23:45.643 { 00:23:45.643 "method": "bdev_set_options", 00:23:45.643 "params": { 00:23:45.643 "bdev_io_pool_size": 65535, 00:23:45.643 "bdev_io_cache_size": 256, 00:23:45.643 "bdev_auto_examine": true, 00:23:45.643 "iobuf_small_cache_size": 128, 00:23:45.643 "iobuf_large_cache_size": 16 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "bdev_raid_set_options", 00:23:45.643 "params": { 00:23:45.643 "process_window_size_kb": 1024 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "bdev_iscsi_set_options", 00:23:45.643 "params": { 00:23:45.643 "timeout_sec": 30 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "bdev_nvme_set_options", 00:23:45.643 "params": { 00:23:45.643 "action_on_timeout": "none", 00:23:45.643 "timeout_us": 0, 00:23:45.643 "timeout_admin_us": 0, 00:23:45.643 "keep_alive_timeout_ms": 10000, 00:23:45.643 "arbitration_burst": 0, 00:23:45.643 "low_priority_weight": 0, 00:23:45.643 "medium_priority_weight": 0, 00:23:45.643 "high_priority_weight": 0, 00:23:45.643 "nvme_adminq_poll_period_us": 10000, 00:23:45.643 "nvme_ioq_poll_period_us": 0, 00:23:45.643 "io_queue_requests": 512, 00:23:45.643 "delay_cmd_submit": true, 00:23:45.643 "transport_retry_count": 4, 00:23:45.643 "bdev_retry_count": 3, 00:23:45.643 "transport_ack_timeout": 0, 00:23:45.643 "ctrlr_loss_timeout_sec": 0, 00:23:45.643 "reconnect_delay_sec": 0, 00:23:45.643 "fast_io_fail_timeout_sec": 0, 00:23:45.643 "disable_auto_failback": false, 00:23:45.643 "generate_uuids": false, 00:23:45.643 "transport_tos": 0, 00:23:45.643 "nvme_error_stat": false, 00:23:45.643 "rdma_srq_size": 0, 00:23:45.643 "io_path_stat": false, 00:23:45.643 "allow_accel_sequence": false, 00:23:45.643 "rdma_max_cq_size": 0, 00:23:45.643 "rdma_cm_event_timeout_ms": 0, 00:23:45.643 "dhchap_digests": [ 00:23:45.643 "sha256", 00:23:45.643 "sha384", 00:23:45.643 "sha512" 00:23:45.643 ], 00:23:45.643 "dhchap_dhgroups": [ 00:23:45.643 "null", 00:23:45.643 "ffdhe2048", 00:23:45.643 "ffdhe3072", 00:23:45.643 "ffdhe4096", 00:23:45.643 "ffdhe6144", 00:23:45.643 "ffdhe8192" 00:23:45.643 ] 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "bdev_nvme_attach_controller", 00:23:45.643 "params": { 00:23:45.643 "name": "nvme0", 00:23:45.643 "trtype": "TCP", 00:23:45.643 "adrfam": "IPv4", 00:23:45.643 "traddr": "10.0.0.2", 00:23:45.643 "trsvcid": "4420", 00:23:45.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.643 "prchk_reftag": false, 00:23:45.643 "prchk_guard": false, 00:23:45.643 "ctrlr_loss_timeout_sec": 0, 00:23:45.643 "reconnect_delay_sec": 0, 00:23:45.643 "fast_io_fail_timeout_sec": 0, 00:23:45.643 "psk": "key0", 00:23:45.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.643 "hdgst": false, 00:23:45.643 "ddgst": false 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "bdev_nvme_set_hotplug", 00:23:45.643 "params": { 00:23:45.643 "period_us": 100000, 00:23:45.643 "enable": false 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "bdev_enable_histogram", 00:23:45.643 "params": { 00:23:45.643 "name": "nvme0n1", 00:23:45.643 "enable": true 00:23:45.643 } 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "method": "bdev_wait_for_examine" 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }, 00:23:45.643 { 00:23:45.643 "subsystem": "nbd", 00:23:45.643 "config": [] 00:23:45.643 } 00:23:45.643 ] 00:23:45.643 }' 00:23:45.643 00:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1448860 00:23:45.643 00:48:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1448860 ']' 00:23:45.643 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1448860 00:23:45.643 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:45.643 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.644 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448860 00:23:45.644 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:45.644 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:45.644 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448860' 00:23:45.644 killing process with pid 1448860 00:23:45.644 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1448860 00:23:45.644 Received shutdown signal, test time was about 1.000000 seconds 00:23:45.644 00:23:45.644 Latency(us) 00:23:45.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.644 =================================================================================================================== 00:23:45.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.644 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1448860 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1448840 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1448840 ']' 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1448840 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1448840 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1448840' 00:23:45.902 killing process with pid 1448840 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1448840 00:23:45.902 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1448840 00:23:46.161 00:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:46.161 00:48:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.161 00:48:57 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:46.161 "subsystems": [ 00:23:46.161 { 00:23:46.161 "subsystem": "keyring", 00:23:46.161 "config": [ 00:23:46.161 { 00:23:46.161 "method": "keyring_file_add_key", 00:23:46.161 "params": { 00:23:46.161 "name": "key0", 00:23:46.161 "path": "/tmp/tmp.3mTVh2ia4q" 00:23:46.161 } 00:23:46.161 } 00:23:46.161 ] 00:23:46.161 }, 00:23:46.161 { 00:23:46.161 "subsystem": "iobuf", 00:23:46.161 "config": [ 00:23:46.161 { 00:23:46.161 "method": "iobuf_set_options", 00:23:46.161 "params": { 00:23:46.161 "small_pool_count": 8192, 00:23:46.161 "large_pool_count": 1024, 00:23:46.161 "small_bufsize": 8192, 00:23:46.161 "large_bufsize": 135168 00:23:46.161 } 00:23:46.161 } 00:23:46.161 ] 00:23:46.161 }, 00:23:46.161 { 00:23:46.161 "subsystem": "sock", 00:23:46.161 "config": [ 00:23:46.161 { 
00:23:46.161 "method": "sock_set_default_impl", 00:23:46.161 "params": { 00:23:46.161 "impl_name": "posix" 00:23:46.161 } 00:23:46.161 }, 00:23:46.161 { 00:23:46.161 "method": "sock_impl_set_options", 00:23:46.161 "params": { 00:23:46.161 "impl_name": "ssl", 00:23:46.161 "recv_buf_size": 4096, 00:23:46.161 "send_buf_size": 4096, 00:23:46.161 "enable_recv_pipe": true, 00:23:46.161 "enable_quickack": false, 00:23:46.161 "enable_placement_id": 0, 00:23:46.161 "enable_zerocopy_send_server": true, 00:23:46.161 "enable_zerocopy_send_client": false, 00:23:46.161 "zerocopy_threshold": 0, 00:23:46.161 "tls_version": 0, 00:23:46.161 "enable_ktls": false 00:23:46.161 } 00:23:46.161 }, 00:23:46.161 { 00:23:46.161 "method": "sock_impl_set_options", 00:23:46.161 "params": { 00:23:46.161 "impl_name": "posix", 00:23:46.161 "recv_buf_size": 2097152, 00:23:46.161 "send_buf_size": 2097152, 00:23:46.161 "enable_recv_pipe": true, 00:23:46.161 "enable_quickack": false, 00:23:46.161 "enable_placement_id": 0, 00:23:46.161 "enable_zerocopy_send_server": true, 00:23:46.161 "enable_zerocopy_send_client": false, 00:23:46.161 "zerocopy_threshold": 0, 00:23:46.161 "tls_version": 0, 00:23:46.161 "enable_ktls": false 00:23:46.161 } 00:23:46.161 } 00:23:46.161 ] 00:23:46.161 }, 00:23:46.161 { 00:23:46.161 "subsystem": "vmd", 00:23:46.161 "config": [] 00:23:46.161 }, 00:23:46.161 { 00:23:46.161 "subsystem": "accel", 00:23:46.161 "config": [ 00:23:46.161 { 00:23:46.161 "method": "accel_set_options", 00:23:46.161 "params": { 00:23:46.161 "small_cache_size": 128, 00:23:46.161 "large_cache_size": 16, 00:23:46.161 "task_count": 2048, 00:23:46.161 "sequence_count": 2048, 00:23:46.161 "buf_count": 2048 00:23:46.161 } 00:23:46.161 } 00:23:46.161 ] 00:23:46.161 }, 00:23:46.161 { 00:23:46.161 "subsystem": "bdev", 00:23:46.161 "config": [ 00:23:46.161 { 00:23:46.161 "method": "bdev_set_options", 00:23:46.161 "params": { 00:23:46.161 "bdev_io_pool_size": 65535, 00:23:46.161 "bdev_io_cache_size": 256, 00:23:46.161 "bdev_auto_examine": true, 00:23:46.161 "iobuf_small_cache_size": 128, 00:23:46.162 "iobuf_large_cache_size": 16 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "bdev_raid_set_options", 00:23:46.162 "params": { 00:23:46.162 "process_window_size_kb": 1024 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "bdev_iscsi_set_options", 00:23:46.162 "params": { 00:23:46.162 "timeout_sec": 30 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "bdev_nvme_set_options", 00:23:46.162 "params": { 00:23:46.162 "action_on_timeout": "none", 00:23:46.162 "timeout_us": 0, 00:23:46.162 "timeout_admin_us": 0, 00:23:46.162 "keep_alive_timeout_ms": 10000, 00:23:46.162 "arbitration_burst": 0, 00:23:46.162 "low_priority_weight": 0, 00:23:46.162 "medium_priority_weight": 0, 00:23:46.162 "high_priority_weight": 0, 00:23:46.162 "nvme_adminq_poll_period_us": 10000, 00:23:46.162 "nvme_ioq_poll_period_us": 0, 00:23:46.162 "io_queue_requests": 0, 00:23:46.162 "delay_cmd_submit": true, 00:23:46.162 "transport_retry_count": 4, 00:23:46.162 "bdev_retry_count": 3, 00:23:46.162 "transport_ack_timeout": 0, 00:23:46.162 "ctrlr_loss_timeout_sec": 0, 00:23:46.162 "reconnect_delay_sec": 0, 00:23:46.162 "fast_io_fail_timeout_sec": 0, 00:23:46.162 "disable_auto_failback": false, 00:23:46.162 "generate_uuids": false, 00:23:46.162 "transport_tos": 0, 00:23:46.162 "nvme_error_stat": false, 00:23:46.162 "rdma_srq_size": 0, 00:23:46.162 "io_path_stat": false, 00:23:46.162 "allow_accel_sequence": false, 00:23:46.162 
"rdma_max_cq_size": 0, 00:23:46.162 "rdma_cm_event_timeout_ms": 0, 00:23:46.162 "dhchap_digests": [ 00:23:46.162 "sha256", 00:23:46.162 "sha384", 00:23:46.162 "sha512" 00:23:46.162 ], 00:23:46.162 "dhchap_dhgroups": [ 00:23:46.162 "null", 00:23:46.162 "ffdhe2048", 00:23:46.162 "ffdhe3072", 00:23:46.162 "ffdhe4096", 00:23:46.162 "ffdhe6144", 00:23:46.162 "ffdhe8192" 00:23:46.162 ] 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "bdev_nvme_set_hotplug", 00:23:46.162 "params": { 00:23:46.162 "period_us": 100000, 00:23:46.162 "enable": false 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "bdev_malloc_create", 00:23:46.162 "params": { 00:23:46.162 "name": "malloc0", 00:23:46.162 "num_blocks": 8192, 00:23:46.162 "block_size": 4096, 00:23:46.162 "physical_block_size": 4096, 00:23:46.162 "uuid": "80aac057-34cf-4123-835b-99fa143e530a", 00:23:46.162 "optimal_io_boundary": 0 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "bdev_wait_for_examine" 00:23:46.162 } 00:23:46.162 ] 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "subsystem": "nbd", 00:23:46.162 "config": [] 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "subsystem": "scheduler", 00:23:46.162 "config": [ 00:23:46.162 { 00:23:46.162 "method": "framework_set_scheduler", 00:23:46.162 "params": { 00:23:46.162 "name": "static" 00:23:46.162 } 00:23:46.162 } 00:23:46.162 ] 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "subsystem": "nvmf", 00:23:46.162 "config": [ 00:23:46.162 { 00:23:46.162 "method": "nvmf_set_config", 00:23:46.162 "params": { 00:23:46.162 "discovery_filter": "match_any", 00:23:46.162 "admin_cmd_passthru": { 00:23:46.162 "identify_ctrlr": false 00:23:46.162 } 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "nvmf_set_max_subsystems", 00:23:46.162 "params": { 00:23:46.162 "max_subsystems": 1024 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "nvmf_set_crdt", 00:23:46.162 "params": { 00:23:46.162 "crdt1": 0, 00:23:46.162 "crdt2": 0, 00:23:46.162 "crdt3": 0 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "nvmf_create_transport", 00:23:46.162 "params": { 00:23:46.162 "trtype": "TCP", 00:23:46.162 "max_queue_depth": 128, 00:23:46.162 "max_io_qpairs_per_ctrlr": 127, 00:23:46.162 "in_capsule_data_size": 4096, 00:23:46.162 "max_io_size": 131072, 00:23:46.162 "io_unit_size": 131072, 00:23:46.162 "max_aq_depth": 128, 00:23:46.162 "num_shared_buffers": 511, 00:23:46.162 "buf_cache_size": 4294967295, 00:23:46.162 "dif_insert_or_strip": false, 00:23:46.162 "zcopy": false, 00:23:46.162 "c2h_success": false, 00:23:46.162 "sock_priority": 0, 00:23:46.162 "abort_timeout_sec": 1, 00:23:46.162 "ack_timeout": 0, 00:23:46.162 "data_wr_pool_size": 0 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "nvmf_create_subsystem", 00:23:46.162 "params": { 00:23:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.162 "allow_any_host": false, 00:23:46.162 "serial_number": "00000000000000000000", 00:23:46.162 "model_number": "SPDK bdev Controller", 00:23:46.162 "max_namespaces": 32, 00:23:46.162 "min_cntlid": 1, 00:23:46.162 "max_cntlid": 65519, 00:23:46.162 "ana_reporting": false 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "nvmf_subsystem_add_host", 00:23:46.162 "params": { 00:23:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.162 "host": "nqn.2016-06.io.spdk:host1", 00:23:46.162 "psk": "key0" 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "nvmf_subsystem_add_ns", 00:23:46.162 
"params": { 00:23:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.162 "namespace": { 00:23:46.162 "nsid": 1, 00:23:46.162 "bdev_name": "malloc0", 00:23:46.162 "nguid": "80AAC05734CF4123835B99FA143E530A", 00:23:46.162 "uuid": "80aac057-34cf-4123-835b-99fa143e530a", 00:23:46.162 "no_auto_visible": false 00:23:46.162 } 00:23:46.162 } 00:23:46.162 }, 00:23:46.162 { 00:23:46.162 "method": "nvmf_subsystem_add_listener", 00:23:46.162 "params": { 00:23:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.162 "listen_address": { 00:23:46.162 "trtype": "TCP", 00:23:46.162 "adrfam": "IPv4", 00:23:46.162 "traddr": "10.0.0.2", 00:23:46.162 "trsvcid": "4420" 00:23:46.162 }, 00:23:46.162 "secure_channel": true 00:23:46.162 } 00:23:46.162 } 00:23:46.162 ] 00:23:46.162 } 00:23:46.162 ] 00:23:46.162 }' 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1449332 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1449332 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1449332 ']' 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.162 00:48:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.163 [2024-07-13 00:48:57.542748] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:46.163 [2024-07-13 00:48:57.542795] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.163 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.163 [2024-07-13 00:48:57.612137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.163 [2024-07-13 00:48:57.649662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.163 [2024-07-13 00:48:57.649702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.163 [2024-07-13 00:48:57.649709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.163 [2024-07-13 00:48:57.649714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.163 [2024-07-13 00:48:57.649719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.163 [2024-07-13 00:48:57.649789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.422 [2024-07-13 00:48:57.855802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.422 [2024-07-13 00:48:57.887842] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.422 [2024-07-13 00:48:57.899525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1449579 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1449579 /var/tmp/bdevperf.sock 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1449579 ']' 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:46.989 00:48:58 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:46.989 "subsystems": [ 00:23:46.989 { 00:23:46.989 "subsystem": "keyring", 00:23:46.989 "config": [ 00:23:46.989 { 00:23:46.989 "method": "keyring_file_add_key", 00:23:46.989 "params": { 00:23:46.989 "name": "key0", 00:23:46.989 "path": "/tmp/tmp.3mTVh2ia4q" 00:23:46.989 } 00:23:46.989 } 00:23:46.989 ] 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "subsystem": "iobuf", 00:23:46.989 "config": [ 00:23:46.989 { 00:23:46.989 "method": "iobuf_set_options", 00:23:46.989 "params": { 00:23:46.989 "small_pool_count": 8192, 00:23:46.989 "large_pool_count": 1024, 00:23:46.989 "small_bufsize": 8192, 00:23:46.989 "large_bufsize": 135168 00:23:46.989 } 00:23:46.989 } 00:23:46.989 ] 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "subsystem": "sock", 00:23:46.989 "config": [ 00:23:46.989 { 00:23:46.989 "method": "sock_set_default_impl", 00:23:46.989 "params": { 00:23:46.989 "impl_name": "posix" 00:23:46.989 } 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "method": "sock_impl_set_options", 00:23:46.989 "params": { 00:23:46.989 "impl_name": "ssl", 00:23:46.989 "recv_buf_size": 4096, 00:23:46.989 "send_buf_size": 4096, 00:23:46.989 "enable_recv_pipe": true, 00:23:46.989 "enable_quickack": false, 00:23:46.989 "enable_placement_id": 0, 00:23:46.989 "enable_zerocopy_send_server": true, 00:23:46.989 "enable_zerocopy_send_client": false, 00:23:46.989 "zerocopy_threshold": 0, 00:23:46.989 "tls_version": 0, 00:23:46.989 "enable_ktls": false 00:23:46.989 } 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "method": "sock_impl_set_options", 00:23:46.989 "params": { 00:23:46.989 "impl_name": "posix", 00:23:46.989 "recv_buf_size": 2097152, 00:23:46.989 "send_buf_size": 2097152, 00:23:46.989 "enable_recv_pipe": true, 00:23:46.989 "enable_quickack": false, 00:23:46.989 "enable_placement_id": 0, 00:23:46.989 "enable_zerocopy_send_server": true, 00:23:46.989 "enable_zerocopy_send_client": false, 00:23:46.989 "zerocopy_threshold": 0, 00:23:46.989 "tls_version": 0, 00:23:46.989 "enable_ktls": false 00:23:46.989 } 00:23:46.989 } 00:23:46.989 ] 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "subsystem": "vmd", 00:23:46.989 "config": [] 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "subsystem": "accel", 00:23:46.989 "config": [ 00:23:46.989 { 00:23:46.989 "method": "accel_set_options", 00:23:46.989 "params": { 00:23:46.989 "small_cache_size": 128, 00:23:46.989 "large_cache_size": 16, 00:23:46.989 "task_count": 2048, 00:23:46.989 "sequence_count": 2048, 00:23:46.989 "buf_count": 2048 00:23:46.989 } 00:23:46.989 } 00:23:46.989 ] 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "subsystem": "bdev", 00:23:46.989 "config": [ 00:23:46.989 { 00:23:46.989 "method": "bdev_set_options", 00:23:46.989 "params": { 00:23:46.989 "bdev_io_pool_size": 65535, 00:23:46.989 "bdev_io_cache_size": 256, 00:23:46.989 "bdev_auto_examine": true, 00:23:46.989 "iobuf_small_cache_size": 128, 00:23:46.989 "iobuf_large_cache_size": 16 00:23:46.989 } 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "method": "bdev_raid_set_options", 00:23:46.989 "params": { 00:23:46.989 "process_window_size_kb": 1024 00:23:46.989 } 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "method": "bdev_iscsi_set_options", 00:23:46.989 "params": { 00:23:46.989 "timeout_sec": 30 00:23:46.989 } 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "method": "bdev_nvme_set_options", 00:23:46.989 "params": { 00:23:46.989 "action_on_timeout": "none", 00:23:46.989 "timeout_us": 0, 00:23:46.989 "timeout_admin_us": 0, 00:23:46.989 "keep_alive_timeout_ms": 
10000, 00:23:46.989 "arbitration_burst": 0, 00:23:46.989 "low_priority_weight": 0, 00:23:46.989 "medium_priority_weight": 0, 00:23:46.989 "high_priority_weight": 0, 00:23:46.989 "nvme_adminq_poll_period_us": 10000, 00:23:46.989 "nvme_ioq_poll_period_us": 0, 00:23:46.989 "io_queue_requests": 512, 00:23:46.989 "delay_cmd_submit": true, 00:23:46.989 "transport_retry_count": 4, 00:23:46.989 "bdev_retry_count": 3, 00:23:46.989 "transport_ack_timeout": 0, 00:23:46.989 "ctrlr_loss_timeout_sec": 0, 00:23:46.989 "reconnect_delay_sec": 0, 00:23:46.989 "fast_io_fail_timeout_sec": 0, 00:23:46.989 "disable_auto_failback": false, 00:23:46.989 "generate_uuids": false, 00:23:46.989 "transport_tos": 0, 00:23:46.989 "nvme_error_stat": false, 00:23:46.989 "rdma_srq_size": 0, 00:23:46.989 "io_path_stat": false, 00:23:46.989 "allow_accel_sequence": false, 00:23:46.989 "rdma_max_cq_size": 0, 00:23:46.989 "rdma_cm_event_timeout_ms": 0, 00:23:46.989 "dhchap_digests": [ 00:23:46.989 "sha256", 00:23:46.989 "sha384", 00:23:46.989 "sha512" 00:23:46.989 ], 00:23:46.989 "dhchap_dhgroups": [ 00:23:46.989 "null", 00:23:46.989 "ffdhe2048", 00:23:46.989 "ffdhe3072", 00:23:46.989 "ffdhe4096", 00:23:46.989 "ffdhe6144", 00:23:46.989 "ffdhe8192" 00:23:46.989 ] 00:23:46.989 } 00:23:46.989 }, 00:23:46.989 { 00:23:46.989 "method": "bdev_nvme_attach_controller", 00:23:46.989 "params": { 00:23:46.989 "name": "nvme0", 00:23:46.989 "trtype": "TCP", 00:23:46.989 "adrfam": "IPv4", 00:23:46.989 "traddr": "10.0.0.2", 00:23:46.989 "trsvcid": "4420", 00:23:46.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.990 "prchk_reftag": false, 00:23:46.990 "prchk_guard": false, 00:23:46.990 "ctrlr_loss_timeout_sec": 0, 00:23:46.990 "reconnect_delay_sec": 0, 00:23:46.990 "fast_io_fail_timeout_sec": 0, 00:23:46.990 "psk": "key0", 00:23:46.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.990 "hdgst": false, 00:23:46.990 "ddgst": false 00:23:46.990 } 00:23:46.990 }, 00:23:46.990 { 00:23:46.990 "method": "bdev_nvme_set_hotplug", 00:23:46.990 "params": { 00:23:46.990 "period_us": 100000, 00:23:46.990 "enable": false 00:23:46.990 } 00:23:46.990 }, 00:23:46.990 { 00:23:46.990 "method": "bdev_enable_histogram", 00:23:46.990 "params": { 00:23:46.990 "name": "nvme0n1", 00:23:46.990 "enable": true 00:23:46.990 } 00:23:46.990 }, 00:23:46.990 { 00:23:46.990 "method": "bdev_wait_for_examine" 00:23:46.990 } 00:23:46.990 ] 00:23:46.990 }, 00:23:46.990 { 00:23:46.990 "subsystem": "nbd", 00:23:46.990 "config": [] 00:23:46.990 } 00:23:46.990 ] 00:23:46.990 }' 00:23:46.990 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.990 00:48:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.990 [2024-07-13 00:48:58.427695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
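Because bdevperf was started with -z it comes up idle: the EAL and reactor messages that follow only mark the app as up, and the -q 128 -o 4k -w verify -t 1 workload is triggered later over RPC. A condensed sketch of the invocation, assuming $bperfcfg holds the config captured above (binary path shortened; flags as logged):

    # Core mask 0x2, idle until RPC start (-z), queue depth 128, 4 KiB verify run.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    bdevperf_pid=$!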
00:23:46.990 [2024-07-13 00:48:58.427743] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449579 ]
00:23:46.990 EAL: No free 2048 kB hugepages reported on node 1
00:23:47.247 [2024-07-13 00:48:58.492528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:47.247 [2024-07-13 00:48:58.533542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:47.247 [2024-07-13 00:48:58.679462] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:47.813 00:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:47.813 00:48:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:47.813 00:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:47.813 00:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name'
00:23:48.071 00:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:48.071 00:48:59 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:48.071 Running I/O for 1 seconds...
00:23:49.004
00:23:49.004 Latency(us)
00:23:49.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.004 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:49.004 Verification LBA range: start 0x0 length 0x2000
00:23:49.004 nvme0n1 : 1.01 5438.45 21.24 0.00 0.00 23371.57 5100.41 22225.25
00:23:49.005 ===================================================================================================================
00:23:49.005 Total : 5438.45 21.24 0.00 0.00 23371.57 5100.41 22225.25
00:23:49.005 0
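The verify run settles at 5438.45 IOPS with 4 KiB I/Os, and the MiB/s column follows directly from that: 5438.45 * 4096 / 2^20 ≈ 21.24 MiB/s, matching the table; average completion latency is 23371.57 us at queue depth 128. The run itself is started over RPC with the same socket used above:

    # Kick off the queued bdevperf workload and wait for its completion status.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests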
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files
00:23:49.005 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:23:49.005 nvmf_trace.0
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1449579
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1449579 ']'
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1449579
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1449579
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1449579'
00:23:49.262 killing process with pid 1449579
00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1449579
00:23:49.262 Received shutdown signal, test time was about 1.000000 seconds
00:23:49.262
00:23:49.262 Latency(us)
00:23:49.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.262 ===================================================================================================================
00:23:49.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:49.262 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1449579
00:23:49.521 00:49:00 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:49.522 rmmod nvme_tcp
00:23:49.522 rmmod nvme_fabrics
00:23:49.522 rmmod nvme_keyring
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1449332 ']'
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1449332
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1449332 ']'
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1449332
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1449332
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1449332'
00:23:49.522 killing process with pid 1449332
00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1449332
00:23:49.522 00:49:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1449332
00:23:49.782 00:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:49.782 00:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:49.782 00:49:01
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:49.782 00:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:49.782 00:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:49.782 00:49:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:49.782 00:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:49.782 00:49:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:51.684 00:49:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:51.684 00:49:03 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1nRh3goj4V /tmp/tmp.9tkZ0SsZ5g /tmp/tmp.3mTVh2ia4q
00:23:51.684
00:23:51.684 real 1m17.493s
00:23:51.684 user 1m55.766s
00:23:51.684 sys 0m29.941s
00:23:51.684 00:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:51.684 00:49:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:51.684 ************************************
00:23:51.684 END TEST nvmf_tls
00:23:51.684 ************************************
00:23:51.942 00:49:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:51.942 00:49:03 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:23:51.942 00:49:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:51.942 00:49:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:51.942 00:49:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:51.942 ************************************
00:23:51.942 START TEST nvmf_fips
00:23:51.942 ************************************
00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:23:51.942 * Looking for test storage...
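Each sub-suite here is wrapped by run_test, which is what prints the START TEST / END TEST banners and the real/user/sys timing seen above. Schematically it behaves like the following simplified sketch (a hypothetical illustration, not the exact helper from autotest_common.sh):

    # Hypothetical simplification: banner, timed execution, banner, exit status.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"    # e.g. test/nvmf/fips/fips.sh --transport=tcp
        local rc=$?
        echo "END TEST $name"
        return $rc
    }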
00:23:51.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.942 00:49:03 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:51.942 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:51.943 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:52.202 Error setting digest 00:23:52.202 0022B5B9CF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:52.202 0022B5B9CF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.202 00:49:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.530 
00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:57.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:57.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:57.530 Found net devices under 0000:86:00.0: cvl_0_0 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.530 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:57.531 Found net devices under 0000:86:00.1: cvl_0_1 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.531 00:49:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.531 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.531 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:23:57.790 00:23:57.790 --- 10.0.0.2 ping statistics --- 00:23:57.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.790 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:57.790 00:23:57.790 --- 10.0.0.1 ping statistics --- 00:23:57.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.790 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1453379 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1453379 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1453379 ']' 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.790 00:49:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:57.790 [2024-07-13 00:49:09.341458] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:57.790 [2024-07-13 00:49:09.341508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.049 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.049 [2024-07-13 00:49:09.414758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.049 [2024-07-13 00:49:09.455588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.049 [2024-07-13 00:49:09.455623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
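The namespace plumbing traced above is what nvmf_tcp_init amounts to: the target-side port is moved into its own network namespace so that initiator and target can speak NVMe/TCP over real hardware on a single host. A minimal sketch of the same steps, assuming two ports already renamed cvl_0_0/cvl_0_1 as in this run (interface names and addresses are taken from this trace, not fixed constants):

ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # host -> namespace sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host sanity check

Both pings answering in well under a millisecond, as above, confirms the link before nvmf_tgt is launched inside the namespace via ip netns exec.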
00:23:58.049 [2024-07-13 00:49:09.455634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.049 [2024-07-13 00:49:09.455640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.049 [2024-07-13 00:49:09.455645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.049 [2024-07-13 00:49:09.455661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:58.614 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.872 [2024-07-13 00:49:10.326557] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.872 [2024-07-13 00:49:10.342542] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.872 [2024-07-13 00:49:10.342703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.872 [2024-07-13 00:49:10.370685] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.872 malloc0 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1453632 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1453632 /var/tmp/bdevperf.sock 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1453632 ']' 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.872 00:49:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.130 [2024-07-13 00:49:10.460692] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:59.130 [2024-07-13 00:49:10.460737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1453632 ] 00:23:59.130 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.130 [2024-07-13 00:49:10.529537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.130 [2024-07-13 00:49:10.571019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.696 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.696 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:59.696 00:49:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:59.954 [2024-07-13 00:49:11.407797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.954 [2024-07-13 00:49:11.407867] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:59.954 TLSTESTn1 00:23:59.954 00:49:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.213 Running I/O for 10 seconds... 
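The TLS handshake itself is exercised through bdevperf: the target side was given the PSK by setup_nvmf_tgt_conf, and the initiator side attaches with the same key over bdevperf's private RPC socket. A condensed sketch of the sequence in the trace (workspace paths shortened; the key is the test PSK shown in this log):

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach succeeding (TLSTESTn1 appears and sustains roughly 5.5k IOPS in the table below) is the real pass criterion: with the FIPS provider active, the connection can only come up if the TLS PSK handshake negotiates FIPS-approved primitives, which is also why the earlier openssl md5 probe was required to fail.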
00:24:10.184 00:24:10.184 Latency(us) 00:24:10.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.184 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.184 Verification LBA range: start 0x0 length 0x2000 00:24:10.184 TLSTESTn1 : 10.02 5583.23 21.81 0.00 0.00 22885.91 4900.95 58811.44 00:24:10.184 =================================================================================================================== 00:24:10.184 Total : 5583.23 21.81 0.00 0.00 22885.91 4900.95 58811.44 00:24:10.184 0 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:10.184 nvmf_trace.0 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1453632 00:24:10.184 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1453632 ']' 00:24:10.185 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1453632 00:24:10.185 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:10.185 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.185 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1453632 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1453632' 00:24:10.442 killing process with pid 1453632 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1453632 00:24:10.442 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.442 00:24:10.442 Latency(us) 00:24:10.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.442 =================================================================================================================== 00:24:10.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.442 [2024-07-13 00:49:21.773896] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1453632 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.442 00:49:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.442 rmmod nvme_tcp 00:24:10.442 rmmod nvme_fabrics 00:24:10.442 rmmod nvme_keyring 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1453379 ']' 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1453379 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1453379 ']' 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1453379 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1453379 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1453379' 00:24:10.700 killing process with pid 1453379 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1453379 00:24:10.700 [2024-07-13 00:49:22.057685] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1453379 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.700 00:49:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:13.231 00:24:13.231 real 0m21.021s 00:24:13.231 user 0m23.158s 00:24:13.231 sys 0m8.708s 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:13.231 ************************************ 00:24:13.231 END TEST nvmf_fips 
00:24:13.231 ************************************ 00:24:13.231 00:49:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:13.231 00:49:24 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:13.231 00:49:24 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:13.231 00:49:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:13.231 00:49:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:13.231 00:49:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:13.231 ************************************ 00:24:13.231 START TEST nvmf_fuzz 00:24:13.231 ************************************ 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:13.231 * Looking for test storage... 00:24:13.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.231 00:49:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:13.232 00:49:24 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:13.232 00:49:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:18.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:18.498 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:18.498 Found net devices under 0000:86:00.0: cvl_0_0 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:18.498 Found net devices under 0000:86:00.1: cvl_0_1 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.498 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.499 00:49:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:24:18.758 00:24:18.758 --- 10.0.0.2 ping statistics --- 00:24:18.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.758 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:24:18.758 00:24:18.758 --- 10.0.0.1 ping statistics --- 00:24:18.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.758 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1458979 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1458979 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1458979 ']' 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
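Before the fuzzer runs, the target is provisioned with a single malloc-backed subsystem; the rpc_cmd calls traced below are thin wrappers around scripts/rpc.py. A sketch of the equivalent direct invocations (paths shortened):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512       # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -a on nvmf_create_subsystem allows any host to connect, which is what lets the fuzzer attach without registering a host NQN first.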
00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.758 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 Malloc0 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:19.017 00:49:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:51.092 Fuzzing completed. 
Shutting down the fuzz application 00:24:51.092 00:24:51.092 Dumping successful admin opcodes: 00:24:51.092 8, 9, 10, 24, 00:24:51.092 Dumping successful io opcodes: 00:24:51.092 0, 9, 00:24:51.092 NS: 0x200003aeff00 I/O qp, Total commands completed: 995771, total successful commands: 5830, random_seed: 1303245568 00:24:51.092 NS: 0x200003aeff00 admin qp, Total commands completed: 132769, total successful commands: 1076, random_seed: 1802907200 00:24:51.092 00:50:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:51.092 Fuzzing completed. Shutting down the fuzz application 00:24:51.092 00:24:51.092 Dumping successful admin opcodes: 00:24:51.092 24, 00:24:51.092 Dumping successful io opcodes: 00:24:51.092 00:24:51.092 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1189074078 00:24:51.092 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1189152722 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.092 rmmod nvme_tcp 00:24:51.092 rmmod nvme_fabrics 00:24:51.092 rmmod nvme_keyring 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1458979 ']' 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1458979 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1458979 ']' 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1458979 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458979 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
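The two fuzz passes above differ only in how commands are generated: the first free-runs random commands for 30 seconds from a fixed seed, the second replays the curated commands in example.json. Stripped of the workspace prefix, the two invocations are:

TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a

The opcode dumps are the part worth reading: they list which admin and I/O opcodes ever completed successfully, and each queue's random_seed is printed with the totals so a failing run can be reproduced.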
00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458979' 00:24:51.092 killing process with pid 1458979 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1458979 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1458979 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.092 00:50:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.998 00:50:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.998 00:50:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:52.998 00:24:52.998 real 0m40.095s 00:24:52.998 user 0m53.884s 00:24:52.998 sys 0m15.072s 00:24:52.998 00:50:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:52.998 00:50:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:52.998 ************************************ 00:24:52.998 END TEST nvmf_fuzz 00:24:52.998 ************************************ 00:24:52.998 00:50:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:52.998 00:50:04 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:52.998 00:50:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:52.998 00:50:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:52.998 00:50:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:52.998 ************************************ 00:24:52.998 START TEST nvmf_multiconnection 00:24:52.998 ************************************ 00:24:52.998 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:53.258 * Looking for test storage... 
00:24:53.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.258 00:50:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.594 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.855 00:50:10 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.855 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.855 00:50:10 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:58.855 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
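
Note on the device scan traced above: the script matches PCI functions against allow-lists of supported Intel (e810/x722) and Mellanox device IDs, then resolves each match (here the two Intel E810 ports, device 0x159b, driver ice) to its kernel interface by globbing the device's net/ directory in sysfs. A minimal sketch of that resolution step, assuming the same sysfs layout; the variable names are illustrative, not the script's own:

    # For each PCI function found (e.g. 0000:86:00.0), list the net
    # interfaces the kernel bound to it; for an E810 port this yields
    # a single name such as cvl_0_0.
    pci=0000:86:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue   # glob may not match if no driver is bound
        echo "net device under $pci: ${dev##*/}"
    done
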
00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:24:58.855 00:24:58.855 --- 10.0.0.2 ping statistics --- 00:24:58.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.855 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:58.855 00:24:58.855 --- 10.0.0.1 ping statistics --- 00:24:58.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.855 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.855 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1467544 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1467544 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1467544 ']' 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
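
Note on the topology set up above: with the NIC's two ports wired back-to-back, the test gets a real TCP path on one host by hiding the target port in a network namespace — cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, an iptables rule admits the NVMe/TCP port, and one ping in each direction proves reachability before the target starts. A condensed sketch of the same sequence, reusing this run's interface and namespace names (root privileges assumed):

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1   # target ns -> root ns

This is why NVMF_APP is rewritten with the namespace wrapper: every target-side command that follows, including nvmf_tgt itself, runs under ip netns exec $ns.
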
00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:59.115 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 [2024-07-13 00:50:10.495864] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:59.115 [2024-07-13 00:50:10.495905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.115 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.115 [2024-07-13 00:50:10.566334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.115 [2024-07-13 00:50:10.608855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.115 [2024-07-13 00:50:10.608893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.115 [2024-07-13 00:50:10.608900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.115 [2024-07-13 00:50:10.608907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.115 [2024-07-13 00:50:10.608912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.115 [2024-07-13 00:50:10.608962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.115 [2024-07-13 00:50:10.609074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.115 [2024-07-13 00:50:10.609100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.115 [2024-07-13 00:50:10.609101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 [2024-07-13 00:50:10.749350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 
00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 Malloc1 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 [2024-07-13 00:50:10.805123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 Malloc2 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.374 Malloc3 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.375 Malloc4 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.375 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 Malloc5 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 Malloc6 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:11 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 Malloc7 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.634 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 Malloc8 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 Malloc9 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.635 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 Malloc10 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 Malloc11 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
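
Note on the provisioning traced above: it is one loop unrolled eleven times — for each i in 1..11 the target gets a 64 MiB, 512-byte-block malloc bdev, a subsystem nqn.2016-06.io.spdk:cnode$i with serial SPDK$i, the bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. The rpc_cmd helper in the log wraps SPDK's RPC client; an equivalent loop issued with scripts/rpc.py directly would look roughly like this (the default /var/tmp/spdk.sock socket is assumed):

    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

All eleven subsystems share the single listener address; the initiator tells them apart purely by NQN.
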
00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.894 00:50:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:01.267 00:50:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:01.267 00:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:01.267 00:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.267 00:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:01.267 00:50:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.168 00:50:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:04.100 00:50:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:04.100 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.100 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.100 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:04.100 00:50:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:05.999 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:05.999 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:05.999 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:06.256 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.256 00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.256 
00:50:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:06.256 00:50:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.256 00:50:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:07.630 00:50:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:07.630 00:50:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.630 00:50:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.630 00:50:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.630 00:50:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.534 00:50:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:10.468 00:50:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:10.468 00:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:10.468 00:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.468 00:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:10.468 00:50:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:12.998 00:50:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:12.998 00:50:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:12.998 00:50:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:12.998 00:50:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:12.998 00:50:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.998 00:50:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:12.998 00:50:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.999 00:50:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:13.934 00:50:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:13.934 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:13.934 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.934 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:13.934 00:50:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.835 00:50:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:17.210 00:50:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:17.210 00:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:17.210 00:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:17.210 00:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:17.210 00:50:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.114 00:50:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:20.049 00:50:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:20.050 00:50:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:20.050 00:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:20.050 00:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:20.050 00:50:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:22.582 00:50:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:22.582 00:50:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:22.582 00:50:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:22.582 00:50:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:22.582 00:50:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.582 00:50:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:22.582 00:50:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.583 00:50:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:23.610 00:50:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:23.610 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:23.610 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.610 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:23.610 00:50:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.514 00:50:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:27.415 00:50:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:27.415 00:50:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:27.415 00:50:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:27.415 00:50:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
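
Note on the connect phase running here (and repeated below through cnode11): each iteration is nvme connect with the host NQN and host ID pinned to the test bed's UUID, followed by waitforserial, which sleeps and polls lsblk until a block device whose serial matches SPDK$i appears — the serial set by nvmf_create_subsystem -s round-trips through the NVMe identify data. A sketch of the pattern reconstructed from the trace (the helper body is inferred from the xtrace lines, with the expected device count fixed at 1 for this test):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

    waitforserial() {
        local serial=$1 i=0 nvme_devices
        while (( i++ <= 15 )); do               # bounded retry, ~30 s total
            sleep 2
            # count block devices whose SERIAL column matches
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == 1 )) && return 0
        done
        return 1
    }
    waitforserial SPDK9
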
00:25:27.415 00:50:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.317 00:50:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:30.694 00:50:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:30.694 00:50:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:30.694 00:50:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.694 00:50:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:30.694 00:50:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.599 00:50:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:33.974 00:50:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:33.974 00:50:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:33.974 00:50:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.974 00:50:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:33.974 00:50:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:35.877 00:50:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:35.877 00:50:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:35.877 00:50:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:35.877 00:50:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:35.877 00:50:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.877 00:50:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:35.877 00:50:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:35.877 [global] 00:25:35.877 thread=1 00:25:35.877 invalidate=1 00:25:35.877 rw=read 00:25:35.877 time_based=1 00:25:35.877 runtime=10 00:25:35.877 ioengine=libaio 00:25:35.877 direct=1 00:25:35.877 bs=262144 00:25:35.877 iodepth=64 00:25:35.877 norandommap=1 00:25:35.877 numjobs=1 00:25:35.877 00:25:35.877 [job0] 00:25:35.877 filename=/dev/nvme0n1 00:25:35.877 [job1] 00:25:35.877 filename=/dev/nvme10n1 00:25:35.877 [job2] 00:25:35.877 filename=/dev/nvme1n1 00:25:35.877 [job3] 00:25:35.877 filename=/dev/nvme2n1 00:25:35.877 [job4] 00:25:35.877 filename=/dev/nvme3n1 00:25:35.877 [job5] 00:25:35.877 filename=/dev/nvme4n1 00:25:35.877 [job6] 00:25:35.877 filename=/dev/nvme5n1 00:25:35.877 [job7] 00:25:35.877 filename=/dev/nvme6n1 00:25:35.877 [job8] 00:25:35.877 filename=/dev/nvme7n1 00:25:35.877 [job9] 00:25:35.877 filename=/dev/nvme8n1 00:25:35.877 [job10] 00:25:35.877 filename=/dev/nvme9n1 00:25:36.134 Could not set queue depth (nvme0n1) 00:25:36.134 Could not set queue depth (nvme10n1) 00:25:36.134 Could not set queue depth (nvme1n1) 00:25:36.134 Could not set queue depth (nvme2n1) 00:25:36.134 Could not set queue depth (nvme3n1) 00:25:36.134 Could not set queue depth (nvme4n1) 00:25:36.134 Could not set queue depth (nvme5n1) 00:25:36.134 Could not set queue depth (nvme6n1) 00:25:36.134 Could not set queue depth (nvme7n1) 00:25:36.134 Could not set queue depth (nvme8n1) 00:25:36.134 Could not set queue depth (nvme9n1) 00:25:36.392 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:36.392 fio-3.35 00:25:36.392 Starting 11 threads 00:25:48.602 00:25:48.602 job0: 
(groupid=0, jobs=1): err= 0: pid=1473973: Sat Jul 13 00:50:58 2024 00:25:48.602 read: IOPS=1097, BW=274MiB/s (288MB/s)(2764MiB/10071msec) 00:25:48.602 slat (usec): min=8, max=66704, avg=645.42, stdev=2645.33 00:25:48.602 clat (usec): min=972, max=212152, avg=57605.20, stdev=41989.63 00:25:48.602 lat (usec): min=998, max=229383, avg=58250.62, stdev=42375.76 00:25:48.602 clat percentiles (msec): 00:25:48.602 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 28], 00:25:48.602 | 30.00th=[ 30], 40.00th=[ 33], 50.00th=[ 41], 60.00th=[ 54], 00:25:48.602 | 70.00th=[ 69], 80.00th=[ 92], 90.00th=[ 121], 95.00th=[ 153], 00:25:48.602 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 203], 99.95th=[ 209], 00:25:48.602 | 99.99th=[ 213] 00:25:48.602 bw ( KiB/s): min=96256, max=543232, per=12.20%, avg=281334.10, stdev=148812.14, samples=20 00:25:48.602 iops : min= 376, max= 2122, avg=1098.95, stdev=581.31, samples=20 00:25:48.602 lat (usec) : 1000=0.01% 00:25:48.602 lat (msec) : 2=0.24%, 4=1.38%, 10=3.08%, 20=4.62%, 50=48.16% 00:25:48.602 lat (msec) : 100=26.14%, 250=16.37% 00:25:48.603 cpu : usr=0.29%, sys=3.69%, ctx=2216, majf=0, minf=3347 00:25:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.603 issued rwts: total=11054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.603 job1: (groupid=0, jobs=1): err= 0: pid=1473975: Sat Jul 13 00:50:58 2024 00:25:48.603 read: IOPS=753, BW=188MiB/s (197MB/s)(1900MiB/10086msec) 00:25:48.603 slat (usec): min=10, max=155121, avg=816.79, stdev=3929.67 00:25:48.603 clat (usec): min=711, max=212217, avg=84044.40, stdev=47911.11 00:25:48.603 lat (usec): min=737, max=212243, avg=84861.19, stdev=48512.33 00:25:48.603 clat percentiles (msec): 00:25:48.603 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 15], 20.00th=[ 30], 00:25:48.603 | 30.00th=[ 52], 40.00th=[ 73], 50.00th=[ 93], 60.00th=[ 104], 00:25:48.603 | 70.00th=[ 114], 80.00th=[ 127], 90.00th=[ 146], 95.00th=[ 161], 00:25:48.603 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 194], 99.95th=[ 197], 00:25:48.603 | 99.99th=[ 213] 00:25:48.603 bw ( KiB/s): min=112640, max=346624, per=8.36%, avg=192861.50, stdev=83633.42, samples=20 00:25:48.603 iops : min= 440, max= 1354, avg=753.35, stdev=326.66, samples=20 00:25:48.603 lat (usec) : 750=0.04%, 1000=0.03% 00:25:48.603 lat (msec) : 2=0.30%, 4=1.32%, 10=4.03%, 20=7.62%, 50=15.87% 00:25:48.603 lat (msec) : 100=27.74%, 250=43.05% 00:25:48.603 cpu : usr=0.25%, sys=2.70%, ctx=1720, majf=0, minf=4097 00:25:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.603 issued rwts: total=7598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.603 job2: (groupid=0, jobs=1): err= 0: pid=1473976: Sat Jul 13 00:50:58 2024 00:25:48.603 read: IOPS=670, BW=168MiB/s (176MB/s)(1678MiB/10017msec) 00:25:48.603 slat (usec): min=10, max=85373, avg=1144.25, stdev=4613.51 00:25:48.603 clat (msec): min=3, max=239, avg=94.27, stdev=46.63 00:25:48.603 lat (msec): min=3, max=239, avg=95.41, stdev=47.44 00:25:48.603 clat percentiles (msec): 00:25:48.603 | 1.00th=[ 14], 
5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 48], 00:25:48.603 | 30.00th=[ 62], 40.00th=[ 75], 50.00th=[ 94], 60.00th=[ 112], 00:25:48.603 | 70.00th=[ 127], 80.00th=[ 142], 90.00th=[ 157], 95.00th=[ 169], 00:25:48.603 | 99.00th=[ 186], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 228], 00:25:48.603 | 99.99th=[ 241] 00:25:48.603 bw ( KiB/s): min=98816, max=363008, per=7.38%, avg=170230.00, stdev=73485.88, samples=20 00:25:48.603 iops : min= 386, max= 1418, avg=664.95, stdev=287.07, samples=20 00:25:48.603 lat (msec) : 4=0.03%, 10=0.64%, 20=1.85%, 50=19.34%, 100=31.89% 00:25:48.603 lat (msec) : 250=46.25% 00:25:48.603 cpu : usr=0.32%, sys=2.42%, ctx=1553, majf=0, minf=4097 00:25:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.603 issued rwts: total=6713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.603 job3: (groupid=0, jobs=1): err= 0: pid=1473977: Sat Jul 13 00:50:58 2024 00:25:48.603 read: IOPS=608, BW=152MiB/s (160MB/s)(1533MiB/10073msec) 00:25:48.603 slat (usec): min=11, max=97119, avg=1151.61, stdev=4623.38 00:25:48.603 clat (msec): min=3, max=223, avg=103.85, stdev=43.04 00:25:48.603 lat (msec): min=3, max=243, avg=105.00, stdev=43.66 00:25:48.603 clat percentiles (msec): 00:25:48.603 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 45], 20.00th=[ 65], 00:25:48.603 | 30.00th=[ 81], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 118], 00:25:48.603 | 70.00th=[ 133], 80.00th=[ 144], 90.00th=[ 159], 95.00th=[ 167], 00:25:48.603 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 209], 99.95th=[ 218], 00:25:48.603 | 99.99th=[ 224] 00:25:48.603 bw ( KiB/s): min=95744, max=245248, per=6.74%, avg=155381.90, stdev=39602.81, samples=20 00:25:48.603 iops : min= 374, max= 958, avg=606.95, stdev=154.71, samples=20 00:25:48.603 lat (msec) : 4=0.05%, 10=1.11%, 20=1.88%, 50=9.15%, 100=32.45% 00:25:48.603 lat (msec) : 250=55.37% 00:25:48.603 cpu : usr=0.26%, sys=2.42%, ctx=1434, majf=0, minf=4097 00:25:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.603 issued rwts: total=6133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.603 job4: (groupid=0, jobs=1): err= 0: pid=1473978: Sat Jul 13 00:50:58 2024 00:25:48.603 read: IOPS=791, BW=198MiB/s (208MB/s)(1982MiB/10012msec) 00:25:48.603 slat (usec): min=8, max=81065, avg=1140.67, stdev=3748.37 00:25:48.603 clat (usec): min=1319, max=205068, avg=79621.52, stdev=47135.67 00:25:48.603 lat (usec): min=1356, max=206108, avg=80762.19, stdev=47879.20 00:25:48.603 clat percentiles (msec): 00:25:48.603 | 1.00th=[ 15], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 36], 00:25:48.603 | 30.00th=[ 45], 40.00th=[ 53], 50.00th=[ 62], 60.00th=[ 84], 00:25:48.603 | 70.00th=[ 108], 80.00th=[ 129], 90.00th=[ 155], 95.00th=[ 167], 00:25:48.603 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 201], 99.95th=[ 203], 00:25:48.603 | 99.99th=[ 205] 00:25:48.603 bw ( KiB/s): min=96256, max=468992, per=8.73%, avg=201301.00, stdev=110048.64, samples=20 00:25:48.603 iops : min= 376, max= 1832, avg=786.30, stdev=429.89, samples=20 00:25:48.603 lat (msec) : 2=0.06%, 4=0.15%, 
10=0.19%, 20=1.77%, 50=34.01% 00:25:48.603 lat (msec) : 100=30.22%, 250=33.60% 00:25:48.603 cpu : usr=0.28%, sys=3.10%, ctx=1612, majf=0, minf=4097 00:25:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.603 issued rwts: total=7926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.603 job5: (groupid=0, jobs=1): err= 0: pid=1473979: Sat Jul 13 00:50:58 2024 00:25:48.603 read: IOPS=853, BW=213MiB/s (224MB/s)(2152MiB/10087msec) 00:25:48.603 slat (usec): min=10, max=107217, avg=734.95, stdev=3912.50 00:25:48.603 clat (usec): min=669, max=245759, avg=74177.45, stdev=52646.93 00:25:48.603 lat (usec): min=700, max=271450, avg=74912.40, stdev=53210.34 00:25:48.603 clat percentiles (usec): 00:25:48.603 | 1.00th=[ 1680], 5.00th=[ 3982], 10.00th=[ 7832], 20.00th=[ 17695], 00:25:48.603 | 30.00th=[ 34866], 40.00th=[ 53740], 50.00th=[ 68682], 60.00th=[ 83362], 00:25:48.603 | 70.00th=[104334], 80.00th=[128451], 90.00th=[152044], 95.00th=[164627], 00:25:48.603 | 99.00th=[187696], 99.50th=[191890], 99.90th=[198181], 99.95th=[198181], 00:25:48.603 | 99.99th=[246416] 00:25:48.603 bw ( KiB/s): min=116736, max=420864, per=9.49%, avg=218752.00, stdev=76915.94, samples=20 00:25:48.603 iops : min= 456, max= 1644, avg=854.50, stdev=300.45, samples=20 00:25:48.603 lat (usec) : 750=0.01%, 1000=0.16% 00:25:48.603 lat (msec) : 2=1.15%, 4=3.69%, 10=6.86%, 20=9.12%, 50=17.12% 00:25:48.603 lat (msec) : 100=29.74%, 250=32.14% 00:25:48.603 cpu : usr=0.35%, sys=2.84%, ctx=2075, majf=0, minf=4097 00:25:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.603 issued rwts: total=8609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.603 job6: (groupid=0, jobs=1): err= 0: pid=1473980: Sat Jul 13 00:50:58 2024 00:25:48.604 read: IOPS=949, BW=237MiB/s (249MB/s)(2395MiB/10089msec) 00:25:48.604 slat (usec): min=7, max=112523, avg=647.37, stdev=3318.75 00:25:48.604 clat (usec): min=717, max=226753, avg=66693.30, stdev=48268.10 00:25:48.604 lat (usec): min=772, max=229434, avg=67340.67, stdev=48705.63 00:25:48.604 clat percentiles (msec): 00:25:48.604 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 20], 20.00th=[ 27], 00:25:48.604 | 30.00th=[ 32], 40.00th=[ 43], 50.00th=[ 53], 60.00th=[ 64], 00:25:48.604 | 70.00th=[ 81], 80.00th=[ 110], 90.00th=[ 150], 95.00th=[ 167], 00:25:48.604 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 211], 99.95th=[ 215], 00:25:48.604 | 99.99th=[ 228] 00:25:48.604 bw ( KiB/s): min=101376, max=504320, per=10.56%, avg=243547.00, stdev=110292.64, samples=20 00:25:48.604 iops : min= 396, max= 1970, avg=951.35, stdev=430.84, samples=20 00:25:48.604 lat (usec) : 750=0.01%, 1000=0.15% 00:25:48.604 lat (msec) : 2=0.74%, 4=1.18%, 10=3.42%, 20=4.84%, 50=37.34% 00:25:48.604 lat (msec) : 100=29.60%, 250=22.72% 00:25:48.604 cpu : usr=0.38%, sys=2.96%, ctx=2161, majf=0, minf=4097 00:25:48.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:48.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.604 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.604 issued rwts: total=9578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.604 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.604 job7: (groupid=0, jobs=1): err= 0: pid=1473981: Sat Jul 13 00:50:58 2024 00:25:48.604 read: IOPS=804, BW=201MiB/s (211MB/s)(2027MiB/10072msec) 00:25:48.604 slat (usec): min=10, max=134753, avg=877.17, stdev=3985.58 00:25:48.604 clat (usec): min=877, max=220553, avg=78556.24, stdev=51339.85 00:25:48.604 lat (usec): min=915, max=301774, avg=79433.42, stdev=52023.09 00:25:48.604 clat percentiles (msec): 00:25:48.604 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 27], 00:25:48.604 | 30.00th=[ 32], 40.00th=[ 51], 50.00th=[ 68], 60.00th=[ 103], 00:25:48.604 | 70.00th=[ 117], 80.00th=[ 132], 90.00th=[ 148], 95.00th=[ 161], 00:25:48.604 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 209], 99.95th=[ 220], 00:25:48.604 | 99.99th=[ 222] 00:25:48.604 bw ( KiB/s): min=110592, max=553472, per=8.93%, avg=205869.85, stdev=114663.18, samples=20 00:25:48.604 iops : min= 432, max= 2162, avg=804.15, stdev=447.88, samples=20 00:25:48.604 lat (usec) : 1000=0.06% 00:25:48.604 lat (msec) : 2=0.44%, 4=0.31%, 10=3.44%, 20=5.72%, 50=29.81% 00:25:48.604 lat (msec) : 100=18.90%, 250=41.32% 00:25:48.604 cpu : usr=0.29%, sys=2.75%, ctx=1776, majf=0, minf=4097 00:25:48.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:48.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.604 issued rwts: total=8106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.604 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.604 job8: (groupid=0, jobs=1): err= 0: pid=1473982: Sat Jul 13 00:50:58 2024 00:25:48.604 read: IOPS=857, BW=214MiB/s (225MB/s)(2159MiB/10074msec) 00:25:48.604 slat (usec): min=10, max=79396, avg=892.09, stdev=3256.77 00:25:48.604 clat (usec): min=1144, max=185082, avg=73694.06, stdev=39102.41 00:25:48.604 lat (usec): min=1188, max=185133, avg=74586.15, stdev=39563.44 00:25:48.604 clat percentiles (msec): 00:25:48.604 | 1.00th=[ 8], 5.00th=[ 15], 10.00th=[ 26], 20.00th=[ 37], 00:25:48.604 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 68], 60.00th=[ 85], 00:25:48.604 | 70.00th=[ 101], 80.00th=[ 113], 90.00th=[ 127], 95.00th=[ 140], 00:25:48.604 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 174], 99.95th=[ 178], 00:25:48.604 | 99.99th=[ 186] 00:25:48.604 bw ( KiB/s): min=135168, max=409804, per=9.52%, avg=219427.80, stdev=81739.01, samples=20 00:25:48.604 iops : min= 528, max= 1600, avg=857.10, stdev=319.20, samples=20 00:25:48.604 lat (msec) : 2=0.06%, 4=0.42%, 10=2.30%, 20=3.90%, 50=26.84% 00:25:48.604 lat (msec) : 100=36.34%, 250=30.14% 00:25:48.604 cpu : usr=0.37%, sys=3.14%, ctx=1922, majf=0, minf=4097 00:25:48.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:48.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.604 issued rwts: total=8636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.604 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.604 job9: (groupid=0, jobs=1): err= 0: pid=1473983: Sat Jul 13 00:50:58 2024 00:25:48.604 read: IOPS=874, BW=219MiB/s (229MB/s)(2206MiB/10088msec) 00:25:48.604 slat (usec): min=9, max=68665, avg=894.23, stdev=3406.21 00:25:48.604 clat (msec): min=2, max=225, 
avg=72.20, stdev=45.49 00:25:48.604 lat (msec): min=2, max=231, avg=73.09, stdev=46.05 00:25:48.604 clat percentiles (msec): 00:25:48.604 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 28], 00:25:48.604 | 30.00th=[ 36], 40.00th=[ 51], 50.00th=[ 63], 60.00th=[ 77], 00:25:48.604 | 70.00th=[ 94], 80.00th=[ 115], 90.00th=[ 140], 95.00th=[ 161], 00:25:48.604 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 199], 99.95th=[ 205], 00:25:48.604 | 99.99th=[ 226] 00:25:48.604 bw ( KiB/s): min=98304, max=580608, per=9.73%, avg=224217.40, stdev=120800.94, samples=20 00:25:48.604 iops : min= 384, max= 2268, avg=875.80, stdev=471.89, samples=20 00:25:48.604 lat (msec) : 4=0.20%, 10=0.96%, 20=3.67%, 50=34.60%, 100=33.87% 00:25:48.604 lat (msec) : 250=26.69% 00:25:48.604 cpu : usr=0.33%, sys=3.07%, ctx=1899, majf=0, minf=4097 00:25:48.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:48.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.604 issued rwts: total=8822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.604 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.604 job10: (groupid=0, jobs=1): err= 0: pid=1473984: Sat Jul 13 00:50:58 2024 00:25:48.604 read: IOPS=762, BW=191MiB/s (200MB/s)(1924MiB/10090msec) 00:25:48.604 slat (usec): min=8, max=58748, avg=802.09, stdev=3336.80 00:25:48.604 clat (usec): min=1196, max=205864, avg=83003.32, stdev=41464.36 00:25:48.604 lat (usec): min=1253, max=210559, avg=83805.41, stdev=41973.39 00:25:48.604 clat percentiles (msec): 00:25:48.604 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 28], 20.00th=[ 48], 00:25:48.604 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 80], 60.00th=[ 92], 00:25:48.604 | 70.00th=[ 107], 80.00th=[ 117], 90.00th=[ 142], 95.00th=[ 155], 00:25:48.604 | 99.00th=[ 186], 99.50th=[ 199], 99.90th=[ 205], 99.95th=[ 207], 00:25:48.604 | 99.99th=[ 207] 00:25:48.604 bw ( KiB/s): min=96256, max=326656, per=8.48%, avg=195405.10, stdev=63104.27, samples=20 00:25:48.604 iops : min= 376, max= 1276, avg=763.30, stdev=246.50, samples=20 00:25:48.604 lat (msec) : 2=0.40%, 4=0.42%, 10=2.52%, 20=3.13%, 50=14.34% 00:25:48.604 lat (msec) : 100=44.91%, 250=34.27% 00:25:48.604 cpu : usr=0.31%, sys=2.57%, ctx=1940, majf=0, minf=4097 00:25:48.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:48.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:48.604 issued rwts: total=7697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.604 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:48.604 00:25:48.604 Run status group 0 (all jobs): 00:25:48.604 READ: bw=2252MiB/s (2361MB/s), 152MiB/s-274MiB/s (160MB/s-288MB/s), io=22.2GiB (23.8GB), run=10012-10090msec 00:25:48.604 00:25:48.604 Disk stats (read/write): 00:25:48.605 nvme0n1: ios=21920/0, merge=0/0, ticks=1240007/0, in_queue=1240007, util=97.30% 00:25:48.605 nvme10n1: ios=15021/0, merge=0/0, ticks=1238799/0, in_queue=1238799, util=97.49% 00:25:48.605 nvme1n1: ios=13044/0, merge=0/0, ticks=1241042/0, in_queue=1241042, util=97.76% 00:25:48.605 nvme2n1: ios=12083/0, merge=0/0, ticks=1237642/0, in_queue=1237642, util=97.89% 00:25:48.605 nvme3n1: ios=15462/0, merge=0/0, ticks=1236765/0, in_queue=1236765, util=97.96% 00:25:48.605 nvme4n1: ios=17038/0, merge=0/0, ticks=1239536/0, in_queue=1239536, util=98.30% 
00:25:48.605 nvme5n1: ios=18960/0, merge=0/0, ticks=1241180/0, in_queue=1241180, util=98.45% 00:25:48.605 nvme6n1: ios=16013/0, merge=0/0, ticks=1236211/0, in_queue=1236211, util=98.59% 00:25:48.605 nvme7n1: ios=17076/0, merge=0/0, ticks=1237583/0, in_queue=1237583, util=98.97% 00:25:48.605 nvme8n1: ios=17431/0, merge=0/0, ticks=1235150/0, in_queue=1235150, util=99.10% 00:25:48.605 nvme9n1: ios=15172/0, merge=0/0, ticks=1239048/0, in_queue=1239048, util=99.25% 00:25:48.605 00:50:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:48.605 [global] 00:25:48.605 thread=1 00:25:48.605 invalidate=1 00:25:48.605 rw=randwrite 00:25:48.605 time_based=1 00:25:48.605 runtime=10 00:25:48.605 ioengine=libaio 00:25:48.605 direct=1 00:25:48.605 bs=262144 00:25:48.605 iodepth=64 00:25:48.605 norandommap=1 00:25:48.605 numjobs=1 00:25:48.605 00:25:48.605 [job0] 00:25:48.605 filename=/dev/nvme0n1 00:25:48.605 [job1] 00:25:48.605 filename=/dev/nvme10n1 00:25:48.605 [job2] 00:25:48.605 filename=/dev/nvme1n1 00:25:48.605 [job3] 00:25:48.605 filename=/dev/nvme2n1 00:25:48.605 [job4] 00:25:48.605 filename=/dev/nvme3n1 00:25:48.605 [job5] 00:25:48.605 filename=/dev/nvme4n1 00:25:48.605 [job6] 00:25:48.605 filename=/dev/nvme5n1 00:25:48.605 [job7] 00:25:48.605 filename=/dev/nvme6n1 00:25:48.605 [job8] 00:25:48.605 filename=/dev/nvme7n1 00:25:48.605 [job9] 00:25:48.605 filename=/dev/nvme8n1 00:25:48.605 [job10] 00:25:48.605 filename=/dev/nvme9n1 00:25:48.605 Could not set queue depth (nvme0n1) 00:25:48.605 Could not set queue depth (nvme10n1) 00:25:48.605 Could not set queue depth (nvme1n1) 00:25:48.605 Could not set queue depth (nvme2n1) 00:25:48.605 Could not set queue depth (nvme3n1) 00:25:48.605 Could not set queue depth (nvme4n1) 00:25:48.605 Could not set queue depth (nvme5n1) 00:25:48.605 Could not set queue depth (nvme6n1) 00:25:48.605 Could not set queue depth (nvme7n1) 00:25:48.605 Could not set queue depth (nvme8n1) 00:25:48.605 Could not set queue depth (nvme9n1) 00:25:48.605 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:48.605 fio-3.35 
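Aside: the two fio-wrapper invocations traced above (multiconnection.sh@33 for the read pass, @34 for the randwrite pass) echo the job file they generate before running it — one [jobN] stanza per connected namespace, all inheriting the [global] parameters, with -i 262144 mapping to bs=262144 (256 KiB), -d 64 to iodepth=64, -t to the rw= line, and -r 10 to runtime=10. A minimal standalone sketch that reproduces the same workload without the wrapper; the job-file contents are taken verbatim from the stanzas echoed above, while the wrapper's actual flag handling and the /dev/nvme0n1 device name are assumptions:

  #!/usr/bin/env bash
  # Sketch only: assumes fio is installed and /dev/nvme0n1 is one of the
  # namespaces connected earlier in this test (destructive on that device).
  cat > multiconnection.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio multiconnection.fio

The read pass differs only in rw=read; adding further [jobN]/filename= stanzas, as the wrapper does for all eleven namespaces, runs them concurrently in one fio process.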
00:25:48.605 Starting 11 threads 00:25:58.584 00:25:58.584 job0: (groupid=0, jobs=1): err= 0: pid=1475517: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=605, BW=151MiB/s (159MB/s)(1539MiB/10165msec); 0 zone resets 00:25:58.584 slat (usec): min=28, max=29641, avg=1435.73, stdev=2976.18 00:25:58.584 clat (msec): min=3, max=294, avg=104.15, stdev=35.26 00:25:58.584 lat (msec): min=3, max=294, avg=105.59, stdev=35.73 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 19], 5.00th=[ 47], 10.00th=[ 67], 20.00th=[ 78], 00:25:58.584 | 30.00th=[ 82], 40.00th=[ 100], 50.00th=[ 105], 60.00th=[ 107], 00:25:58.584 | 70.00th=[ 117], 80.00th=[ 133], 90.00th=[ 153], 95.00th=[ 163], 00:25:58.584 | 99.00th=[ 190], 99.50th=[ 207], 99.90th=[ 284], 99.95th=[ 284], 00:25:58.584 | 99.99th=[ 296] 00:25:58.584 bw ( KiB/s): min=105984, max=251904, per=9.24%, avg=156006.40, stdev=44282.62, samples=20 00:25:58.584 iops : min= 414, max= 984, avg=609.40, stdev=172.98, samples=20 00:25:58.584 lat (msec) : 4=0.02%, 10=0.24%, 20=0.80%, 50=4.66%, 100=36.62% 00:25:58.584 lat (msec) : 250=57.48%, 500=0.18% 00:25:58.584 cpu : usr=1.69%, sys=1.88%, ctx=2395, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,6157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job1: (groupid=0, jobs=1): err= 0: pid=1475529: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=660, BW=165MiB/s (173MB/s)(1682MiB/10178msec); 0 zone resets 00:25:58.584 slat (usec): min=22, max=29589, avg=1407.23, stdev=2700.05 00:25:58.584 clat (msec): min=3, max=350, avg=95.39, stdev=35.53 00:25:58.584 lat (msec): min=5, max=350, avg=96.80, stdev=35.93 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 24], 5.00th=[ 67], 10.00th=[ 71], 20.00th=[ 73], 00:25:58.584 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 86], 00:25:58.584 | 70.00th=[ 106], 80.00th=[ 127], 90.00th=[ 140], 95.00th=[ 161], 00:25:58.584 | 99.00th=[ 203], 99.50th=[ 268], 99.90th=[ 330], 99.95th=[ 338], 00:25:58.584 | 99.99th=[ 351] 00:25:58.584 bw ( KiB/s): min=98304, max=223232, per=10.10%, avg=170572.80, stdev=43958.70, samples=20 00:25:58.584 iops : min= 384, max= 872, avg=666.30, stdev=171.71, samples=20 00:25:58.584 lat (msec) : 4=0.01%, 10=0.10%, 20=0.62%, 50=1.50%, 100=64.03% 00:25:58.584 lat (msec) : 250=33.16%, 500=0.56% 00:25:58.584 cpu : usr=1.56%, sys=1.60%, ctx=2060, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,6727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job2: (groupid=0, jobs=1): err= 0: pid=1475530: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=547, BW=137MiB/s (143MB/s)(1377MiB/10067msec); 0 zone resets 00:25:58.584 slat (usec): min=27, max=83176, avg=1537.21, stdev=3793.06 00:25:58.584 clat (usec): min=1293, max=255963, avg=115301.52, stdev=57134.25 00:25:58.584 lat (usec): min=1689, max=256030, avg=116838.72, stdev=57999.63 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 
6], 5.00th=[ 21], 10.00th=[ 37], 20.00th=[ 72], 00:25:58.584 | 30.00th=[ 85], 40.00th=[ 100], 50.00th=[ 106], 60.00th=[ 124], 00:25:58.584 | 70.00th=[ 148], 80.00th=[ 169], 90.00th=[ 190], 95.00th=[ 220], 00:25:58.584 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 257], 00:25:58.584 | 99.99th=[ 257] 00:25:58.584 bw ( KiB/s): min=69632, max=274944, per=8.25%, avg=139340.80, stdev=57082.42, samples=20 00:25:58.584 iops : min= 272, max= 1074, avg=544.30, stdev=222.98, samples=20 00:25:58.584 lat (msec) : 2=0.05%, 4=0.40%, 10=2.07%, 20=2.49%, 50=9.19% 00:25:58.584 lat (msec) : 100=26.18%, 250=58.96%, 500=0.65% 00:25:58.584 cpu : usr=1.33%, sys=1.75%, ctx=2520, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,5507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job3: (groupid=0, jobs=1): err= 0: pid=1475531: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=628, BW=157MiB/s (165MB/s)(1580MiB/10050msec); 0 zone resets 00:25:58.584 slat (usec): min=19, max=48826, avg=1352.37, stdev=3334.46 00:25:58.584 clat (msec): min=2, max=220, avg=100.39, stdev=57.10 00:25:58.584 lat (msec): min=2, max=220, avg=101.74, stdev=57.95 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 39], 20.00th=[ 42], 00:25:58.584 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 87], 60.00th=[ 105], 00:25:58.584 | 70.00th=[ 131], 80.00th=[ 165], 90.00th=[ 194], 95.00th=[ 203], 00:25:58.584 | 99.00th=[ 213], 99.50th=[ 218], 99.90th=[ 220], 99.95th=[ 220], 00:25:58.584 | 99.99th=[ 220] 00:25:58.584 bw ( KiB/s): min=77824, max=358912, per=9.49%, avg=160179.20, stdev=82604.51, samples=20 00:25:58.584 iops : min= 304, max= 1402, avg=625.70, stdev=322.67, samples=20 00:25:58.584 lat (msec) : 4=0.09%, 10=0.79%, 20=1.69%, 50=22.29%, 100=32.90% 00:25:58.584 lat (msec) : 250=42.23% 00:25:58.584 cpu : usr=1.52%, sys=1.93%, ctx=2630, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,6320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job4: (groupid=0, jobs=1): err= 0: pid=1475532: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=515, BW=129MiB/s (135MB/s)(1311MiB/10178msec); 0 zone resets 00:25:58.584 slat (usec): min=25, max=70613, avg=1483.25, stdev=4099.57 00:25:58.584 clat (usec): min=1420, max=309219, avg=122629.58, stdev=57273.24 00:25:58.584 lat (usec): min=1472, max=309261, avg=124112.83, stdev=58063.64 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 52], 20.00th=[ 74], 00:25:58.584 | 30.00th=[ 90], 40.00th=[ 102], 50.00th=[ 114], 60.00th=[ 138], 00:25:58.584 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 203], 95.00th=[ 218], 00:25:58.584 | 99.00th=[ 236], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 309], 00:25:58.584 | 99.99th=[ 309] 00:25:58.584 bw ( KiB/s): min=75776, max=220160, per=7.85%, avg=132633.60, stdev=43889.35, samples=20 00:25:58.584 iops : min= 296, max= 860, avg=518.10, stdev=171.44, 
samples=20 00:25:58.584 lat (msec) : 2=0.06%, 4=0.44%, 10=0.88%, 20=2.31%, 50=5.93% 00:25:58.584 lat (msec) : 100=29.53%, 250=60.13%, 500=0.72% 00:25:58.584 cpu : usr=1.07%, sys=1.60%, ctx=2668, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,5245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job5: (groupid=0, jobs=1): err= 0: pid=1475533: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=545, BW=136MiB/s (143MB/s)(1372MiB/10050msec); 0 zone resets 00:25:58.584 slat (usec): min=25, max=91852, avg=1552.26, stdev=4291.49 00:25:58.584 clat (usec): min=1451, max=282130, avg=115634.63, stdev=66385.03 00:25:58.584 lat (usec): min=1500, max=282199, avg=117186.89, stdev=67355.80 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 32], 20.00th=[ 46], 00:25:58.584 | 30.00th=[ 74], 40.00th=[ 94], 50.00th=[ 111], 60.00th=[ 136], 00:25:58.584 | 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 207], 95.00th=[ 224], 00:25:58.584 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 279], 00:25:58.584 | 99.99th=[ 284] 00:25:58.584 bw ( KiB/s): min=61440, max=282624, per=8.22%, avg=138854.40, stdev=62403.58, samples=20 00:25:58.584 iops : min= 240, max= 1104, avg=542.40, stdev=243.76, samples=20 00:25:58.584 lat (msec) : 2=0.05%, 4=0.77%, 10=3.23%, 20=3.99%, 50=13.07% 00:25:58.584 lat (msec) : 100=23.31%, 250=53.40%, 500=2.19% 00:25:58.584 cpu : usr=1.18%, sys=1.71%, ctx=2579, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,5487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job6: (groupid=0, jobs=1): err= 0: pid=1475534: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=580, BW=145MiB/s (152MB/s)(1477MiB/10177msec); 0 zone resets 00:25:58.584 slat (usec): min=26, max=63040, avg=1326.33, stdev=3564.85 00:25:58.584 clat (usec): min=1439, max=347521, avg=108855.34, stdev=62031.97 00:25:58.584 lat (usec): min=1637, max=347580, avg=110181.67, stdev=62933.28 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 47], 00:25:58.584 | 30.00th=[ 70], 40.00th=[ 81], 50.00th=[ 100], 60.00th=[ 114], 00:25:58.584 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 197], 95.00th=[ 213], 00:25:58.584 | 99.00th=[ 241], 99.50th=[ 275], 99.90th=[ 338], 99.95th=[ 338], 00:25:58.584 | 99.99th=[ 347] 00:25:58.584 bw ( KiB/s): min=73728, max=251904, per=8.86%, avg=149632.00, stdev=63072.19, samples=20 00:25:58.584 iops : min= 288, max= 984, avg=584.50, stdev=246.38, samples=20 00:25:58.584 lat (msec) : 2=0.08%, 4=0.27%, 10=0.68%, 20=3.13%, 50=16.84% 00:25:58.584 lat (msec) : 100=30.11%, 250=48.25%, 500=0.64% 00:25:58.584 cpu : usr=1.42%, sys=1.60%, ctx=3030, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,5909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job7: (groupid=0, jobs=1): err= 0: pid=1475535: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=610, BW=153MiB/s (160MB/s)(1555MiB/10181msec); 0 zone resets 00:25:58.584 slat (usec): min=27, max=69040, avg=1442.33, stdev=3214.27 00:25:58.584 clat (msec): min=2, max=351, avg=103.27, stdev=37.70 00:25:58.584 lat (msec): min=3, max=351, avg=104.72, stdev=38.10 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 22], 5.00th=[ 54], 10.00th=[ 68], 20.00th=[ 73], 00:25:58.584 | 30.00th=[ 78], 40.00th=[ 96], 50.00th=[ 104], 60.00th=[ 106], 00:25:58.584 | 70.00th=[ 114], 80.00th=[ 131], 90.00th=[ 155], 95.00th=[ 167], 00:25:58.584 | 99.00th=[ 197], 99.50th=[ 271], 99.90th=[ 330], 99.95th=[ 342], 00:25:58.584 | 99.99th=[ 351] 00:25:58.584 bw ( KiB/s): min=100352, max=223232, per=9.33%, avg=157581.60, stdev=38448.23, samples=20 00:25:58.584 iops : min= 392, max= 872, avg=615.55, stdev=150.19, samples=20 00:25:58.584 lat (msec) : 4=0.05%, 10=0.13%, 20=0.69%, 50=3.47%, 100=42.08% 00:25:58.584 lat (msec) : 250=52.97%, 500=0.61% 00:25:58.584 cpu : usr=1.89%, sys=1.97%, ctx=2220, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:58.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.584 issued rwts: total=0,6219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.584 job8: (groupid=0, jobs=1): err= 0: pid=1475538: Sat Jul 13 00:51:09 2024 00:25:58.584 write: IOPS=529, BW=132MiB/s (139MB/s)(1347MiB/10173msec); 0 zone resets 00:25:58.584 slat (usec): min=24, max=47693, avg=1598.28, stdev=3745.77 00:25:58.584 clat (msec): min=3, max=351, avg=119.20, stdev=57.08 00:25:58.584 lat (msec): min=3, max=351, avg=120.80, stdev=57.89 00:25:58.584 clat percentiles (msec): 00:25:58.584 | 1.00th=[ 22], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 56], 00:25:58.584 | 30.00th=[ 81], 40.00th=[ 105], 50.00th=[ 122], 60.00th=[ 138], 00:25:58.584 | 70.00th=[ 155], 80.00th=[ 176], 90.00th=[ 197], 95.00th=[ 203], 00:25:58.584 | 99.00th=[ 220], 99.50th=[ 279], 99.90th=[ 342], 99.95th=[ 342], 00:25:58.584 | 99.99th=[ 351] 00:25:58.584 bw ( KiB/s): min=79872, max=357579, per=8.07%, avg=136330.15, stdev=67520.10, samples=20 00:25:58.584 iops : min= 312, max= 1396, avg=532.50, stdev=263.61, samples=20 00:25:58.584 lat (msec) : 4=0.04%, 10=0.20%, 20=0.65%, 50=17.17%, 100=18.64% 00:25:58.584 lat (msec) : 250=62.60%, 500=0.71% 00:25:58.584 cpu : usr=1.18%, sys=1.59%, ctx=2180, majf=0, minf=1 00:25:58.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:58.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.585 issued rwts: total=0,5387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.585 job9: (groupid=0, jobs=1): err= 0: pid=1475540: Sat Jul 13 00:51:09 2024 00:25:58.585 write: IOPS=830, BW=208MiB/s (218MB/s)(2084MiB/10036msec); 0 zone resets 00:25:58.585 slat (usec): min=26, max=44040, avg=1092.56, stdev=2207.91 00:25:58.585 clat (usec): min=931, max=195908, avg=75929.71, stdev=28773.47 
00:25:58.585 lat (usec): min=987, max=195980, avg=77022.27, stdev=29148.65 00:25:58.585 clat percentiles (msec): 00:25:58.585 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 39], 20.00th=[ 63], 00:25:58.585 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 79], 00:25:58.585 | 70.00th=[ 79], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 128], 00:25:58.585 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 197], 99.95th=[ 197], 00:25:58.585 | 99.99th=[ 197] 00:25:58.585 bw ( KiB/s): min=114688, max=381440, per=12.54%, avg=211814.40, stdev=66208.78, samples=20 00:25:58.585 iops : min= 448, max= 1490, avg=827.40, stdev=258.63, samples=20 00:25:58.585 lat (usec) : 1000=0.02% 00:25:58.585 lat (msec) : 2=0.10%, 4=0.29%, 10=0.80%, 20=1.31%, 50=15.64% 00:25:58.585 lat (msec) : 100=65.31%, 250=16.53% 00:25:58.585 cpu : usr=2.01%, sys=2.43%, ctx=2975, majf=0, minf=1 00:25:58.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:58.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.585 issued rwts: total=0,8337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.585 job10: (groupid=0, jobs=1): err= 0: pid=1475541: Sat Jul 13 00:51:09 2024 00:25:58.585 write: IOPS=576, BW=144MiB/s (151MB/s)(1467MiB/10170msec); 0 zone resets 00:25:58.585 slat (usec): min=29, max=83492, avg=1395.28, stdev=3257.54 00:25:58.585 clat (msec): min=2, max=306, avg=109.51, stdev=43.92 00:25:58.585 lat (msec): min=2, max=306, avg=110.91, stdev=44.46 00:25:58.585 clat percentiles (msec): 00:25:58.585 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 53], 20.00th=[ 72], 00:25:58.585 | 30.00th=[ 99], 40.00th=[ 105], 50.00th=[ 107], 60.00th=[ 120], 00:25:58.585 | 70.00th=[ 130], 80.00th=[ 140], 90.00th=[ 159], 95.00th=[ 192], 00:25:58.585 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 296], 99.95th=[ 300], 00:25:58.585 | 99.99th=[ 309] 00:25:58.585 bw ( KiB/s): min=81920, max=205824, per=8.80%, avg=148567.85, stdev=32635.67, samples=20 00:25:58.585 iops : min= 320, max= 804, avg=580.30, stdev=127.54, samples=20 00:25:58.585 lat (msec) : 4=0.22%, 10=0.80%, 20=3.24%, 50=5.18%, 100=24.99% 00:25:58.585 lat (msec) : 250=65.38%, 500=0.19% 00:25:58.585 cpu : usr=1.43%, sys=1.89%, ctx=2698, majf=0, minf=1 00:25:58.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:58.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:58.585 issued rwts: total=0,5866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:58.585 00:25:58.585 Run status group 0 (all jobs): 00:25:58.585 WRITE: bw=1649MiB/s (1729MB/s), 129MiB/s-208MiB/s (135MB/s-218MB/s), io=16.4GiB (17.6GB), run=10036-10181msec 00:25:58.585 00:25:58.585 Disk stats (read/write): 00:25:58.585 nvme0n1: ios=49/12280, merge=0/0, ticks=1962/1231563, in_queue=1233525, util=99.82% 00:25:58.585 nvme10n1: ios=22/13411, merge=0/0, ticks=33/1229258, in_queue=1229291, util=94.91% 00:25:58.585 nvme1n1: ios=38/10772, merge=0/0, ticks=1125/1202764, in_queue=1203889, util=99.94% 00:25:58.585 nvme2n1: ios=0/12178, merge=0/0, ticks=0/1211347, in_queue=1211347, util=95.65% 00:25:58.585 nvme3n1: ios=44/10441, merge=0/0, ticks=2702/1230895, in_queue=1233597, util=99.91% 00:25:58.585 nvme4n1: ios=0/10659, merge=0/0, 
ticks=0/1201773, in_queue=1201773, util=96.65% 00:25:58.585 nvme5n1: ios=0/11771, merge=0/0, ticks=0/1236645, in_queue=1236645, util=97.10% 00:25:58.585 nvme6n1: ios=41/12378, merge=0/0, ticks=2087/1222838, in_queue=1224925, util=99.95% 00:25:58.585 nvme7n1: ios=0/10733, merge=0/0, ticks=0/1232564, in_queue=1232564, util=98.37% 00:25:58.585 nvme8n1: ios=34/16123, merge=0/0, ticks=709/1204423, in_queue=1205132, util=100.00% 00:25:58.585 nvme9n1: ios=0/11693, merge=0/0, ticks=0/1236032, in_queue=1236032, util=99.05% 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:58.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:58.585 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.585 00:51:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.585 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.585 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.585 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:58.844 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.844 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:59.102 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.102 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:59.361 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.361 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:59.619 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.619 00:51:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:59.877 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.877 00:51:11 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:59.877 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:59.877 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:00.136 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.136 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:00.395 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:00.395 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:00.395 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:00.395 rmmod nvme_tcp 00:26:00.654 rmmod nvme_fabrics 00:26:00.654 rmmod nvme_keyring 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1467544 ']' 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1467544 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1467544 ']' 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1467544 00:26:00.654 00:51:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:00.654 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.654 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467544 00:26:00.654 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:00.654 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:00.654 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467544' 00:26:00.654 killing process with pid 1467544 00:26:00.654 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1467544 00:26:00.654 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1467544 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.912 00:51:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.527 00:51:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:03.527 
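Aside: the teardown traced above repeats one pattern per subsystem — nvme disconnect on the NQN, a waitforserial_disconnect poll until the SPDKn serial drops out of lsblk, then an nvmf_delete_subsystem RPC — before nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the target process. A sketch reconstructing that loop from the xtrace (multiconnection.sh@37-40 and autotest_common.sh@1219-1231); the loop body follows the trace, the standalone scaffolding around it is assumed:

  #!/usr/bin/env bash
  # Assumed standalone rendering of the traced teardown loop; the test
  # itself sets NVMF_SUBSYS=11 (cnode1..cnode11, serials SPDK1..SPDK11).
  NVMF_SUBSYS=11
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # Poll, as waitforserial_disconnect does, until the serial is gone.
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      # rpc_cmd is the autotest helper wrapping the SPDK scripts/rpc.py CLI.
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done

Deleting the subsystem only after the host-side device has disappeared avoids racing the kernel initiator's namespace removal against the target's subsystem teardown.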
00:26:03.527 real 1m9.969s 00:26:03.527 user 4m7.590s 00:26:03.527 sys 0m24.973s 00:26:03.527 00:51:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:03.527 00:51:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:03.527 ************************************ 00:26:03.527 END TEST nvmf_multiconnection 00:26:03.527 ************************************ 00:26:03.527 00:51:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:03.527 00:51:14 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:03.527 00:51:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:03.527 00:51:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.527 00:51:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:03.527 ************************************ 00:26:03.527 START TEST nvmf_initiator_timeout 00:26:03.527 ************************************ 00:26:03.527 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:03.527 * Looking for test storage... 00:26:03.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:03.527 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.527 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:03.527 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.527 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:03.528 00:51:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.805 00:51:20 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:08.805 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:08.806 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:08.806 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:08.806 
00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:08.806 Found net devices under 0000:86:00.0: cvl_0_0 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:08.806 Found net devices under 0000:86:00.1: cvl_0_1 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:08.806 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:09.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:26:09.066 00:26:09.066 --- 10.0.0.2 ping statistics --- 00:26:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.066 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:26:09.066 00:26:09.066 --- 10.0.0.1 ping statistics --- 00:26:09.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.066 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1481487 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1481487 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1481487 ']' 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.066 00:51:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.066 [2024-07-13 00:51:20.531443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
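[Annotation] The connectivity checks above complete the test topology: one physical port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; nvmf_tgt is then launched inside the namespace, producing the startup notices around this point. A sketch of the wiring as traced (the cvl_* interface names are simply what this rig exposes):

  # Sketch of the two-port, one-namespace topology traced above.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                               # initiator -> target reachability
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target in a namespace lets both ends of the NVMe/TCP connection live on one host while still exercising the physical NICs.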
00:26:09.066 [2024-07-13 00:51:20.531484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.066 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.066 [2024-07-13 00:51:20.600034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:09.325 [2024-07-13 00:51:20.641256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.325 [2024-07-13 00:51:20.641288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.325 [2024-07-13 00:51:20.641295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.325 [2024-07-13 00:51:20.641302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.325 [2024-07-13 00:51:20.641307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.325 [2024-07-13 00:51:20.641366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.325 [2024-07-13 00:51:20.641474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.325 [2024-07-13 00:51:20.641580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.326 [2024-07-13 00:51:20.641581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.893 Malloc0 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.893 Delay0 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.893 00:51:21 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.893 [2024-07-13 00:51:21.406489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:09.893 [2024-07-13 00:51:21.431359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.893 00:51:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:11.273 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:11.273 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:11.273 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.273 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:11.273 00:51:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:13.175 00:51:24 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1482206 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:13.175 00:51:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:13.175 [global] 00:26:13.175 thread=1 00:26:13.175 invalidate=1 00:26:13.175 rw=write 00:26:13.175 time_based=1 00:26:13.175 runtime=60 00:26:13.175 ioengine=libaio 00:26:13.175 direct=1 00:26:13.175 bs=4096 00:26:13.175 iodepth=1 00:26:13.175 norandommap=0 00:26:13.175 numjobs=1 00:26:13.175 00:26:13.175 verify_dump=1 00:26:13.175 verify_backlog=512 00:26:13.175 verify_state_save=0 00:26:13.175 do_verify=1 00:26:13.175 verify=crc32c-intel 00:26:13.175 [job0] 00:26:13.175 filename=/dev/nvme0n1 00:26:13.175 Could not set queue depth (nvme0n1) 00:26:13.433 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:13.433 fio-3.35 00:26:13.433 Starting 1 thread 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.771 true 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.771 true 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.771 true 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:16.771 true 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.771 00:51:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.299 true 00:26:19.299 00:51:30 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.299 true 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.299 true 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:19.299 true 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:19.299 00:51:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1482206 00:27:15.546 00:27:15.546 job0: (groupid=0, jobs=1): err= 0: pid=1482333: Sat Jul 13 00:52:25 2024 00:27:15.546 read: IOPS=276, BW=1104KiB/s (1131kB/s)(64.8MiB/60037msec) 00:27:15.546 slat (usec): min=6, max=16659, avg= 9.18, stdev=146.41 00:27:15.546 clat (usec): min=208, max=41561k, avg=3409.03, stdev=322843.68 00:27:15.546 lat (usec): min=217, max=41561k, avg=3418.21, stdev=322843.73 00:27:15.546 clat percentiles (usec): 00:27:15.546 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 245], 00:27:15.546 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:27:15.546 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:27:15.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:27:15.546 | 99.99th=[42206] 00:27:15.546 write: IOPS=281, BW=1126KiB/s (1153kB/s)(66.0MiB/60037msec); 0 zone resets 00:27:15.546 slat (usec): min=9, max=27560, avg=11.98, stdev=211.95 00:27:15.546 clat (usec): min=147, max=401, avg=183.41, stdev=14.21 00:27:15.546 lat (usec): min=158, max=27916, avg=195.38, stdev=213.75 00:27:15.546 clat percentiles (usec): 00:27:15.546 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 174], 00:27:15.546 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:27:15.546 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:27:15.546 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 302], 99.95th=[ 338], 00:27:15.546 | 99.99th=[ 400] 00:27:15.546 bw ( KiB/s): min= 600, max= 9496, per=100.00%, avg=6436.57, stdev=2999.96, samples=21 00:27:15.546 iops : min= 150, max= 2374, avg=1609.14, stdev=749.99, samples=21 00:27:15.546 lat (usec) : 250=68.70%, 500=30.48%, 750=0.03% 00:27:15.546 lat (msec) : 2=0.01%, 50=0.78%, >=2000=0.01% 00:27:15.546 cpu : usr=0.28%, sys=0.53%, ctx=33477, majf=0, minf=2 00:27:15.546 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:15.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.546 issued rwts: total=16576,16896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:15.546 00:27:15.546 Run status group 0 (all jobs): 00:27:15.546 READ: bw=1104KiB/s (1131kB/s), 1104KiB/s-1104KiB/s (1131kB/s-1131kB/s), io=64.8MiB (67.9MB), run=60037-60037msec 00:27:15.546 WRITE: bw=1126KiB/s (1153kB/s), 1126KiB/s-1126KiB/s (1153kB/s-1153kB/s), io=66.0MiB (69.2MB), run=60037-60037msec 00:27:15.546 00:27:15.546 Disk stats (read/write): 00:27:15.546 nvme0n1: ios=16673/16896, merge=0/0, ticks=16647/3011, in_queue=19658, util=100.00% 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:15.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:15.546 nvmf hotplug test: fio successful as expected 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:27:15.546 rmmod nvme_tcp 00:27:15.546 rmmod nvme_fabrics 00:27:15.546 rmmod nvme_keyring 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1481487 ']' 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1481487 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1481487 ']' 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1481487 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481487 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481487' 00:27:15.546 killing process with pid 1481487 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1481487 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1481487 00:27:15.546 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:15.547 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:15.547 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:15.547 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.547 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.547 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.547 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.547 00:52:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.115 00:52:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.115 00:27:16.115 real 1m12.969s 00:27:16.115 user 4m24.688s 00:27:16.115 sys 0m6.672s 00:27:16.115 00:52:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.115 00:52:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.115 ************************************ 00:27:16.115 END TEST nvmf_initiator_timeout 00:27:16.115 ************************************ 00:27:16.115 00:52:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:16.115 00:52:27 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:16.115 00:52:27 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:16.115 00:52:27 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:16.115 00:52:27 nvmf_tcp 
-- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.115 00:52:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.470 00:52:32 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:21.470 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
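[Annotation] Looking back at the nvmf_initiator_timeout run that just ended: the test drives its timeout scenario through a delay bdev. Delay0 wraps Malloc0 with 30 µs latencies, a fio write job is started against it, and the latencies are then raised to roughly 31 s, above the Linux initiator's default 30 s I/O timeout, before being restored so the job can finish; the ~41 s entries in the read-latency percentiles are the I/Os that were timed out and requeued during that window. The RPC sequence, reconstructed from the trace (latency arguments to the delay bdev are in microseconds; rpc.py stands in for the harness's rpc_cmd wrapper):

  # Reconstructed from the trace: delay-bdev setup and latency injection.
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  # ... subsystem cnode1 created on Delay0, initiator connected, fio started ...
  rpc.py bdev_delay_update_latency Delay0 avg_read  31000000    # ~31 s
  rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # as traced
  sleep 3
  rpc.py bdev_delay_update_latency Delay0 avg_read  30          # restore to 30 us
  rpc.py bdev_delay_update_latency Delay0 avg_write 30
  rpc.py bdev_delay_update_latency Delay0 p99_read  30
  rpc.py bdev_delay_update_latency Delay0 p99_write 30

The "nvmf hotplug test: fio successful as expected" line confirms fio survived the injected timeouts with a zero exit status.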
00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:21.470 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:21.470 Found net devices under 0000:86:00.0: cvl_0_0 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:21.470 Found net devices under 0000:86:00.1: cvl_0_1 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:21.470 00:52:33 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:21.470 00:52:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:21.470 00:52:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.470 00:52:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.729 ************************************ 00:27:21.729 START TEST nvmf_perf_adq 00:27:21.729 ************************************ 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:21.729 * Looking for test storage... 
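[Annotation] Each suite in this log is driven by the same run_test wrapper from autotest_common.sh: it prints a START banner, executes the named script under xtrace, reports real/user/sys timing, prints an END banner, and propagates the exit status. A minimal sketch of that shape, assuming nothing beyond the banners visible in this log (the real helper also manages xtrace state and failure bookkeeping):

  # Minimal sketch of the run_test pattern seen throughout this log.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                     # run the test script with its arguments
      local rc=$?                   # capture the script's exit status
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test nvmf_perf_adq ./test/nvmf/target/perf_adq.sh --transport=tcp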
00:27:21.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.729 00:52:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:28.302 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:28.302 Found 0000:86:00.1 (0x8086 - 0x159b) 
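The device walk being traced here is pure sysfs: nvmf/common.sh buckets known Intel (e810/x722) and Mellanox PCI device IDs, then maps each selected PCI function to its kernel net device through /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that mapping follows; the two BDFs are this testbed's E810 ports, and it assumes the ice driver is already bound:

  for pci in 0000:86:00.0 0000:86:00.1; do
      # a bound network function lists its interfaces under .../net/
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] || continue
          dev=${path##*/}          # strip the sysfs path, keep the interface name
          echo "Found net device under $pci: $dev ($(cat "$path/operstate"))"
      done
  done

The operstate read is why the trace can gate on [[ up == up ]] before adding an interface to net_devs.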
00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.302 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:28.303 Found net devices under 0000:86:00.0: cvl_0_0 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:28.303 Found net devices under 0000:86:00.1: cvl_0_1 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:28.303 00:52:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:28.303 00:52:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:30.209 00:52:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:35.478 00:52:46 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.478 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:35.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:35.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:35.479 Found net devices under 0000:86:00.0: cvl_0_0 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:35.479 Found net devices under 0000:86:00.1: cvl_0_1 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:35.479 00:52:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.479 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.479 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.479 00:52:47 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:35.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:27:35.479 00:27:35.479 --- 10.0.0.2 ping statistics --- 00:27:35.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.479 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:35.479 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:27:35.479 00:27:35.479 --- 10.0.0.1 ping statistics --- 00:27:35.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.479 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1499786 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1499786 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1499786 ']' 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.738 [2024-07-13 00:52:47.126863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
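Condensed from the nvmftestinit trace above: because NET_TYPE=phy gives the test a single physical NIC pair, the target port (cvl_0_0) is pushed into its own network namespace so one host can act as both target (10.0.0.2) and initiator (10.0.0.1). Interface names and addresses below are the ones this run assigned:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Every nvmf_tgt invocation is then wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why the application start traced here carries that prefix.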
00:27:35.738 [2024-07-13 00:52:47.126911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.738 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.738 [2024-07-13 00:52:47.200947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:35.738 [2024-07-13 00:52:47.242982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.738 [2024-07-13 00:52:47.243024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.738 [2024-07-13 00:52:47.243031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.738 [2024-07-13 00:52:47.243036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.738 [2024-07-13 00:52:47.243046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.738 [2024-07-13 00:52:47.243101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.738 [2024-07-13 00:52:47.243212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.738 [2024-07-13 00:52:47.243318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.738 [2024-07-13 00:52:47.243317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.738 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 [2024-07-13 00:52:47.451981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 Malloc1 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 [2024-07-13 00:52:47.495550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1499919 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:35.997 00:52:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:35.997 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:38.528 
"tick_rate": 2300000000, 00:27:38.528 "poll_groups": [ 00:27:38.528 { 00:27:38.528 "name": "nvmf_tgt_poll_group_000", 00:27:38.528 "admin_qpairs": 1, 00:27:38.528 "io_qpairs": 1, 00:27:38.528 "current_admin_qpairs": 1, 00:27:38.528 "current_io_qpairs": 1, 00:27:38.528 "pending_bdev_io": 0, 00:27:38.528 "completed_nvme_io": 21172, 00:27:38.528 "transports": [ 00:27:38.528 { 00:27:38.528 "trtype": "TCP" 00:27:38.528 } 00:27:38.528 ] 00:27:38.528 }, 00:27:38.528 { 00:27:38.528 "name": "nvmf_tgt_poll_group_001", 00:27:38.528 "admin_qpairs": 0, 00:27:38.528 "io_qpairs": 1, 00:27:38.528 "current_admin_qpairs": 0, 00:27:38.528 "current_io_qpairs": 1, 00:27:38.528 "pending_bdev_io": 0, 00:27:38.528 "completed_nvme_io": 21386, 00:27:38.528 "transports": [ 00:27:38.528 { 00:27:38.528 "trtype": "TCP" 00:27:38.528 } 00:27:38.528 ] 00:27:38.528 }, 00:27:38.528 { 00:27:38.528 "name": "nvmf_tgt_poll_group_002", 00:27:38.528 "admin_qpairs": 0, 00:27:38.528 "io_qpairs": 1, 00:27:38.528 "current_admin_qpairs": 0, 00:27:38.528 "current_io_qpairs": 1, 00:27:38.528 "pending_bdev_io": 0, 00:27:38.528 "completed_nvme_io": 21284, 00:27:38.528 "transports": [ 00:27:38.528 { 00:27:38.528 "trtype": "TCP" 00:27:38.528 } 00:27:38.528 ] 00:27:38.528 }, 00:27:38.528 { 00:27:38.528 "name": "nvmf_tgt_poll_group_003", 00:27:38.528 "admin_qpairs": 0, 00:27:38.528 "io_qpairs": 1, 00:27:38.528 "current_admin_qpairs": 0, 00:27:38.528 "current_io_qpairs": 1, 00:27:38.528 "pending_bdev_io": 0, 00:27:38.528 "completed_nvme_io": 21004, 00:27:38.528 "transports": [ 00:27:38.528 { 00:27:38.528 "trtype": "TCP" 00:27:38.528 } 00:27:38.528 ] 00:27:38.528 } 00:27:38.528 ] 00:27:38.528 }' 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:38.528 00:52:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1499919 00:27:46.642 Initializing NVMe Controllers 00:27:46.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:46.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:46.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:46.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:46.642 Initialization complete. Launching workers. 
00:27:46.642 ======================================================== 00:27:46.642 Latency(us) 00:27:46.642 Device Information : IOPS MiB/s Average min max 00:27:46.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10829.80 42.30 5909.50 1712.77 9492.27 00:27:46.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11088.10 43.31 5771.92 2374.45 9991.17 00:27:46.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10990.00 42.93 5824.47 1549.92 10590.36 00:27:46.642 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10948.70 42.77 5845.80 2000.86 13485.15 00:27:46.642 ======================================================== 00:27:46.642 Total : 43856.58 171.31 5837.51 1549.92 13485.15 00:27:46.642 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.642 rmmod nvme_tcp 00:27:46.642 rmmod nvme_fabrics 00:27:46.642 rmmod nvme_keyring 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1499786 ']' 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1499786 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1499786 ']' 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1499786 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1499786 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1499786' 00:27:46.642 killing process with pid 1499786 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1499786 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1499786 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:46.642 00:52:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.548 00:52:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:48.548 00:52:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:48.548 00:52:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:49.928 00:53:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:51.832 00:53:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.111 00:53:08 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:57.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:57.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:57.111 Found net devices under 0000:86:00.0: cvl_0_0 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:57.111 Found net devices under 0000:86:00.1: cvl_0_1 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.111 
00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:27:57.111 00:27:57.111 --- 10.0.0.2 ping statistics --- 00:27:57.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.111 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:27:57.111 00:27:57.111 --- 10.0.0.1 ping statistics --- 00:27:57.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.111 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:57.111 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:57.112 net.core.busy_poll = 1 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:57.112 net.core.busy_read = 1 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:57.112 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1503498 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1503498 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1503498 ']' 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.371 [2024-07-13 00:53:08.721347] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:57.371 [2024-07-13 00:53:08.721392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.371 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.371 [2024-07-13 00:53:08.792804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.371 [2024-07-13 00:53:08.834682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.371 [2024-07-13 00:53:08.834722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.371 [2024-07-13 00:53:08.834729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.371 [2024-07-13 00:53:08.834735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.371 [2024-07-13 00:53:08.834740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
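The adq_configure_driver pass traced above is the ADQ half of the test. Stripped of the netns prefix, the per-NIC setup reduces to the sequence below; the queue layout 2@0 2@2 (two traffic classes, two queues each) matches this testbed, with TC1 acting as the dedicated ADQ queue set:

  ethtool --offload cvl_0_0 hw-tc-offload on         # let tc program the ice hardware
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                     # busy-poll sockets instead of waiting on IRQs
  sysctl -w net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP into TC1

On the SPDK side the matching knobs are the sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1 RPCs traced just below, so that accepted connections are grouped by the hardware receive queue they arrive on rather than spread round-robin.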
00:27:57.371 [2024-07-13 00:53:08.834798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.371 [2024-07-13 00:53:08.834909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.371 [2024-07-13 00:53:08.835015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.371 [2024-07-13 00:53:08.835017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.371 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.631 00:53:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.631 [2024-07-13 00:53:09.037068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.631 Malloc1 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:09 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:57.631 [2024-07-13 00:53:09.088823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1503721 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:57.631 00:53:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:57.631 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.594 00:53:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:59.594 00:53:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.594 00:53:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.594 00:53:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.594 00:53:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:59.594 "tick_rate": 2300000000, 00:27:59.594 "poll_groups": [ 00:27:59.594 { 00:27:59.594 "name": "nvmf_tgt_poll_group_000", 00:27:59.594 "admin_qpairs": 1, 00:27:59.594 "io_qpairs": 3, 00:27:59.594 "current_admin_qpairs": 1, 00:27:59.594 "current_io_qpairs": 3, 00:27:59.594 "pending_bdev_io": 0, 00:27:59.594 "completed_nvme_io": 29386, 00:27:59.594 "transports": [ 00:27:59.594 { 00:27:59.594 "trtype": "TCP" 00:27:59.594 } 00:27:59.594 ] 00:27:59.594 }, 00:27:59.594 { 00:27:59.594 "name": "nvmf_tgt_poll_group_001", 00:27:59.594 "admin_qpairs": 0, 00:27:59.594 "io_qpairs": 1, 00:27:59.594 "current_admin_qpairs": 0, 00:27:59.594 "current_io_qpairs": 1, 00:27:59.594 "pending_bdev_io": 0, 00:27:59.594 "completed_nvme_io": 29090, 00:27:59.594 "transports": [ 00:27:59.594 { 00:27:59.594 "trtype": "TCP" 00:27:59.594 } 00:27:59.594 ] 00:27:59.594 }, 00:27:59.594 { 00:27:59.594 "name": "nvmf_tgt_poll_group_002", 00:27:59.594 "admin_qpairs": 0, 00:27:59.594 "io_qpairs": 0, 00:27:59.594 "current_admin_qpairs": 0, 00:27:59.594 "current_io_qpairs": 0, 00:27:59.594 "pending_bdev_io": 0, 00:27:59.594 "completed_nvme_io": 0, 
00:27:59.594 "transports": [ 00:27:59.594 { 00:27:59.594 "trtype": "TCP" 00:27:59.594 } 00:27:59.594 ] 00:27:59.594 }, 00:27:59.594 { 00:27:59.594 "name": "nvmf_tgt_poll_group_003", 00:27:59.594 "admin_qpairs": 0, 00:27:59.594 "io_qpairs": 0, 00:27:59.594 "current_admin_qpairs": 0, 00:27:59.594 "current_io_qpairs": 0, 00:27:59.594 "pending_bdev_io": 0, 00:27:59.594 "completed_nvme_io": 0, 00:27:59.594 "transports": [ 00:27:59.594 { 00:27:59.594 "trtype": "TCP" 00:27:59.594 } 00:27:59.594 ] 00:27:59.594 } 00:27:59.594 ] 00:27:59.594 }' 00:27:59.594 00:53:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:59.594 00:53:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:59.853 00:53:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:59.853 00:53:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:59.853 00:53:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1503721 00:28:07.970 Initializing NVMe Controllers 00:28:07.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:07.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:07.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:07.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:07.971 Initialization complete. Launching workers. 00:28:07.971 ======================================================== 00:28:07.971 Latency(us) 00:28:07.971 Device Information : IOPS MiB/s Average min max 00:28:07.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4846.30 18.93 13239.87 1463.38 62293.68 00:28:07.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5040.70 19.69 12697.83 1650.41 61517.48 00:28:07.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5592.00 21.84 11446.32 1439.40 58641.94 00:28:07.971 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15287.70 59.72 4185.86 1306.38 7233.73 00:28:07.971 ======================================================== 00:28:07.971 Total : 30766.70 120.18 8326.22 1306.38 62293.68 00:28:07.971 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:07.971 rmmod nvme_tcp 00:28:07.971 rmmod nvme_fabrics 00:28:07.971 rmmod nvme_keyring 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1503498 ']' 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1503498 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1503498 ']' 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1503498 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1503498 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1503498' 00:28:07.971 killing process with pid 1503498 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1503498 00:28:07.971 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1503498 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.230 00:53:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.524 00:53:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.524 00:53:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:11.524 00:28:11.524 real 0m49.589s 00:28:11.524 user 2m43.515s 00:28:11.524 sys 0m9.557s 00:28:11.524 00:53:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.524 00:53:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.524 ************************************ 00:28:11.524 END TEST nvmf_perf_adq 00:28:11.524 ************************************ 00:28:11.524 00:53:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:11.524 00:53:22 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:11.524 00:53:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:11.524 00:53:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.524 00:53:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.524 ************************************ 00:28:11.524 START TEST nvmf_shutdown 00:28:11.524 ************************************ 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:11.524 * Looking for test storage... 
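Before the shutdown suite gets going, the nvmf_perf_adq run that just finished is easier to follow in condensed form: the adq_configure_nvmf_target step is a short RPC sequence, and the pass criterion is a jq count over nvmf_get_stats. The sketch below takes every RPC name and flag verbatim from the trace; presenting them as direct scripts/rpc.py calls instead of the harness's rpc_cmd wrapper is the only assumption.

# Condensed sketch of adq_configure_nvmf_target, as exercised in the trace above.
socket_impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)  # resolved to posix above
scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$socket_impl"
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Pass criterion once spdk_nvme_perf completes: count the poll groups that are
# handling no I/O qpairs; ADQ should have steered the qpairs onto a subset of cores.
scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l

In the run above that count is 2 (poll groups 002 and 003 report zero current_io_qpairs), so the [[ 2 -lt 2 ]] check at perf_adq.sh@101 is false, the failure branch is skipped, and the script waits on perfpid 1503721, which produces the latency table above.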
00:28:11.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:11.524 ************************************ 00:28:11.524 START TEST nvmf_shutdown_tc1 00:28:11.524 ************************************ 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:28:11.524 00:53:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.524 00:53:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:16.798 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:16.798 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.798 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.798 00:53:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:16.799 Found net devices under 0000:86:00.0: cvl_0_0 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:16.799 Found net devices under 0000:86:00.1: cvl_0_1 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.799 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:28:17.057 00:28:17.057 --- 10.0.0.2 ping statistics --- 00:28:17.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.057 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:28:17.057 00:28:17.057 --- 10.0.0.1 ping statistics --- 00:28:17.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.057 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1508941 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:17.057 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1508941 00:28:17.058 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1508941 ']' 00:28:17.058 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.058 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.058 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.058 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.058 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.316 [2024-07-13 00:53:28.650576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
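The nvmftestinit sequence traced above is the standard phy-mode TCP topology: the two ports of the E810 card are split into a target/initiator pair by moving the target port into its own network namespace, addressing both ends out of 10.0.0.0/24, opening the NVMe/TCP port in the firewall, and proving connectivity with one ping in each direction. A minimal sketch, with commands verbatim from the trace (the two ports are presumably loopback-cabled on this rig, since the pings succeed without a switch):

# Sketch of nvmf_tcp_init as traced above; interface names are from this rig.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Because NVMF_APP is prefixed with NVMF_TARGET_NS_CMD at nvmf/common.sh@270, every nvmf_tgt below (including nvmfpid 1508941 starting here) runs under ip netns exec cvl_0_0_ns_spdk, and its 10.0.0.2:4420 listener is reachable only across that link.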
00:28:17.316 [2024-07-13 00:53:28.650621] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.316 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.316 [2024-07-13 00:53:28.721609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.316 [2024-07-13 00:53:28.762777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.316 [2024-07-13 00:53:28.762814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.316 [2024-07-13 00:53:28.762822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.316 [2024-07-13 00:53:28.762828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.316 [2024-07-13 00:53:28.762833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.316 [2024-07-13 00:53:28.762958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.316 [2024-07-13 00:53:28.763067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.316 [2024-07-13 00:53:28.763174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.316 [2024-07-13 00:53:28.763174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:17.316 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.316 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:17.316 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.316 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:17.316 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.574 [2024-07-13 00:53:28.904142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:17.574 00:53:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.574 00:53:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:17.574 Malloc1 00:28:17.574 [2024-07-13 00:53:29.000099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.574 Malloc2 00:28:17.574 Malloc3 00:28:17.574 Malloc4 00:28:17.831 Malloc5 00:28:17.831 Malloc6 00:28:17.831 Malloc7 00:28:17.831 Malloc8 00:28:17.831 Malloc9 00:28:17.831 Malloc10 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1509210 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1509210 
/var/tmp/bdevperf.sock 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1509210 ']' 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:18.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.089 { 00:28:18.089 "params": { 00:28:18.089 "name": "Nvme$subsystem", 00:28:18.089 "trtype": "$TEST_TRANSPORT", 00:28:18.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.089 "adrfam": "ipv4", 00:28:18.089 "trsvcid": "$NVMF_PORT", 00:28:18.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.089 "hdgst": ${hdgst:-false}, 00:28:18.089 "ddgst": ${ddgst:-false} 00:28:18.089 }, 00:28:18.089 "method": "bdev_nvme_attach_controller" 00:28:18.089 } 00:28:18.089 EOF 00:28:18.089 )") 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.089 { 00:28:18.089 "params": { 00:28:18.089 "name": "Nvme$subsystem", 00:28:18.089 "trtype": "$TEST_TRANSPORT", 00:28:18.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.089 "adrfam": "ipv4", 00:28:18.089 "trsvcid": "$NVMF_PORT", 00:28:18.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.089 "hdgst": ${hdgst:-false}, 00:28:18.089 "ddgst": ${ddgst:-false} 00:28:18.089 }, 00:28:18.089 "method": "bdev_nvme_attach_controller" 00:28:18.089 } 00:28:18.089 EOF 00:28:18.089 )") 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.089 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.089 { 00:28:18.089 "params": { 00:28:18.089 
"name": "Nvme$subsystem", 00:28:18.089 "trtype": "$TEST_TRANSPORT", 00:28:18.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.089 "adrfam": "ipv4", 00:28:18.089 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.090 { 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme$subsystem", 00:28:18.090 "trtype": "$TEST_TRANSPORT", 00:28:18.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.090 { 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme$subsystem", 00:28:18.090 "trtype": "$TEST_TRANSPORT", 00:28:18.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.090 { 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme$subsystem", 00:28:18.090 "trtype": "$TEST_TRANSPORT", 00:28:18.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.090 { 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme$subsystem", 
00:28:18.090 "trtype": "$TEST_TRANSPORT", 00:28:18.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 [2024-07-13 00:53:29.477337] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:18.090 [2024-07-13 00:53:29.477384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.090 { 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme$subsystem", 00:28:18.090 "trtype": "$TEST_TRANSPORT", 00:28:18.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.090 { 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme$subsystem", 00:28:18.090 "trtype": "$TEST_TRANSPORT", 00:28:18.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.090 { 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme$subsystem", 00:28:18.090 "trtype": "$TEST_TRANSPORT", 00:28:18.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "$NVMF_PORT", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.090 "hdgst": ${hdgst:-false}, 00:28:18.090 "ddgst": ${ddgst:-false} 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 } 00:28:18.090 EOF 00:28:18.090 )") 00:28:18.090 00:53:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:18.090 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:18.090 00:53:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme1", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme2", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme3", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme4", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme5", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme6", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme7", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme8", 00:28:18.090 "trtype": "tcp", 00:28:18.090 
"traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme9", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 },{ 00:28:18.090 "params": { 00:28:18.090 "name": "Nvme10", 00:28:18.090 "trtype": "tcp", 00:28:18.090 "traddr": "10.0.0.2", 00:28:18.090 "adrfam": "ipv4", 00:28:18.090 "trsvcid": "4420", 00:28:18.090 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:18.090 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:18.090 "hdgst": false, 00:28:18.090 "ddgst": false 00:28:18.090 }, 00:28:18.090 "method": "bdev_nvme_attach_controller" 00:28:18.090 }' 00:28:18.090 [2024-07-13 00:53:29.548284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.090 [2024-07-13 00:53:29.587948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1509210 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:19.511 00:53:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:20.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1509210 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1508941 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.448 { 00:28:20.448 "params": { 00:28:20.448 "name": "Nvme$subsystem", 00:28:20.448 "trtype": "$TEST_TRANSPORT", 00:28:20.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.448 "adrfam": "ipv4", 00:28:20.448 "trsvcid": "$NVMF_PORT", 00:28:20.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.448 "hdgst": ${hdgst:-false}, 00:28:20.448 "ddgst": ${ddgst:-false} 00:28:20.448 }, 00:28:20.448 "method": "bdev_nvme_attach_controller" 00:28:20.448 } 00:28:20.448 EOF 00:28:20.448 )") 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.448 00:53:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.448 { 00:28:20.448 "params": { 00:28:20.448 "name": "Nvme$subsystem", 00:28:20.448 "trtype": "$TEST_TRANSPORT", 00:28:20.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.448 "adrfam": "ipv4", 00:28:20.448 "trsvcid": "$NVMF_PORT", 00:28:20.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.448 "hdgst": ${hdgst:-false}, 00:28:20.448 "ddgst": ${ddgst:-false} 00:28:20.448 }, 00:28:20.448 "method": "bdev_nvme_attach_controller" 00:28:20.448 } 00:28:20.448 EOF 00:28:20.448 )") 00:28:20.448 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.708 { 00:28:20.708 "params": { 00:28:20.708 "name": "Nvme$subsystem", 00:28:20.708 "trtype": "$TEST_TRANSPORT", 00:28:20.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.708 "adrfam": "ipv4", 00:28:20.708 "trsvcid": "$NVMF_PORT", 00:28:20.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.708 "hdgst": ${hdgst:-false}, 00:28:20.708 "ddgst": ${ddgst:-false} 00:28:20.708 }, 00:28:20.708 "method": "bdev_nvme_attach_controller" 00:28:20.708 } 00:28:20.708 EOF 00:28:20.708 )") 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.708 { 00:28:20.708 "params": { 00:28:20.708 "name": "Nvme$subsystem", 00:28:20.708 "trtype": "$TEST_TRANSPORT", 00:28:20.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.708 "adrfam": "ipv4", 00:28:20.708 "trsvcid": "$NVMF_PORT", 00:28:20.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.708 "hdgst": ${hdgst:-false}, 00:28:20.708 "ddgst": ${ddgst:-false} 00:28:20.708 }, 00:28:20.708 "method": "bdev_nvme_attach_controller" 00:28:20.708 } 00:28:20.708 EOF 00:28:20.708 )") 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:28:20.708 { 00:28:20.708 "params": { 00:28:20.708 "name": "Nvme$subsystem", 00:28:20.708 "trtype": "$TEST_TRANSPORT", 00:28:20.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.708 "adrfam": "ipv4", 00:28:20.708 "trsvcid": "$NVMF_PORT", 00:28:20.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.708 "hdgst": ${hdgst:-false}, 00:28:20.708 "ddgst": ${ddgst:-false} 00:28:20.708 }, 00:28:20.708 "method": "bdev_nvme_attach_controller" 00:28:20.708 } 00:28:20.708 EOF 00:28:20.708 )") 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.708 { 00:28:20.708 "params": { 00:28:20.708 "name": "Nvme$subsystem", 00:28:20.708 "trtype": "$TEST_TRANSPORT", 00:28:20.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.708 "adrfam": "ipv4", 00:28:20.708 "trsvcid": "$NVMF_PORT", 00:28:20.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.708 "hdgst": ${hdgst:-false}, 00:28:20.708 "ddgst": ${ddgst:-false} 00:28:20.708 }, 00:28:20.708 "method": "bdev_nvme_attach_controller" 00:28:20.708 } 00:28:20.708 EOF 00:28:20.708 )") 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.708 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.708 { 00:28:20.708 "params": { 00:28:20.708 "name": "Nvme$subsystem", 00:28:20.708 "trtype": "$TEST_TRANSPORT", 00:28:20.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.708 "adrfam": "ipv4", 00:28:20.708 "trsvcid": "$NVMF_PORT", 00:28:20.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.708 "hdgst": ${hdgst:-false}, 00:28:20.708 "ddgst": ${ddgst:-false} 00:28:20.708 }, 00:28:20.708 "method": "bdev_nvme_attach_controller" 00:28:20.708 } 00:28:20.708 EOF 00:28:20.709 )") 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.709 [2024-07-13 00:53:32.036686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:20.709 [2024-07-13 00:53:32.036734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509590 ] 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.709 { 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme$subsystem", 00:28:20.709 "trtype": "$TEST_TRANSPORT", 00:28:20.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "$NVMF_PORT", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.709 "hdgst": ${hdgst:-false}, 00:28:20.709 "ddgst": ${ddgst:-false} 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 } 00:28:20.709 EOF 00:28:20.709 )") 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.709 { 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme$subsystem", 00:28:20.709 "trtype": "$TEST_TRANSPORT", 00:28:20.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "$NVMF_PORT", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.709 "hdgst": ${hdgst:-false}, 00:28:20.709 "ddgst": ${ddgst:-false} 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 } 00:28:20.709 EOF 00:28:20.709 )") 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.709 { 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme$subsystem", 00:28:20.709 "trtype": "$TEST_TRANSPORT", 00:28:20.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "$NVMF_PORT", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.709 "hdgst": ${hdgst:-false}, 00:28:20.709 "ddgst": ${ddgst:-false} 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 } 00:28:20.709 EOF 00:28:20.709 )") 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:20.709 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.709 00:53:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme1", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme2", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme3", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme4", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme5", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme6", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme7", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme8", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:20.709 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme9", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 },{ 00:28:20.709 "params": { 00:28:20.709 "name": "Nvme10", 00:28:20.709 "trtype": "tcp", 00:28:20.709 "traddr": "10.0.0.2", 00:28:20.709 "adrfam": "ipv4", 00:28:20.709 "trsvcid": "4420", 00:28:20.709 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:20.709 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:20.709 "hdgst": false, 00:28:20.709 "ddgst": false 00:28:20.709 }, 00:28:20.709 "method": "bdev_nvme_attach_controller" 00:28:20.709 }' 00:28:20.709 [2024-07-13 00:53:32.108730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.709 [2024-07-13 00:53:32.150184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.085 Running I/O for 1 seconds... 00:28:23.021 00:28:23.021 Latency(us) 00:28:23.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.021 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme1n1 : 1.15 278.96 17.43 0.00 0.00 227480.44 19147.91 227951.30 00:28:23.021 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme2n1 : 1.15 277.60 17.35 0.00 0.00 225386.14 16754.42 228863.11 00:28:23.021 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme3n1 : 1.07 309.08 19.32 0.00 0.00 195330.24 9232.03 215186.03 00:28:23.021 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme4n1 : 1.16 276.13 17.26 0.00 0.00 220263.56 14189.97 217009.64 00:28:23.021 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme5n1 : 1.08 242.36 15.15 0.00 0.00 245082.31 5014.93 206067.98 00:28:23.021 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme6n1 : 1.16 274.84 17.18 0.00 0.00 215041.96 15842.62 219745.06 00:28:23.021 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme7n1 : 1.16 276.74 17.30 0.00 0.00 210335.65 15842.62 203332.56 00:28:23.021 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme8n1 : 1.17 327.74 20.48 0.00 0.00 174593.26 13791.05 198773.54 00:28:23.021 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme9n1 : 1.17 273.81 17.11 0.00 0.00 206446.24 18692.01 232510.33 00:28:23.021 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:23.021 
Verification LBA range: start 0x0 length 0x400 00:28:23.021 Nvme10n1 : 1.17 272.40 17.03 0.00 0.00 204516.31 17438.27 233422.14 00:28:23.021 =================================================================================================================== 00:28:23.021 Total : 2809.66 175.60 0.00 0.00 211032.15 5014.93 233422.14 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.280 rmmod nvme_tcp 00:28:23.280 rmmod nvme_fabrics 00:28:23.280 rmmod nvme_keyring 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1508941 ']' 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1508941 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1508941 ']' 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1508941 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1508941 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1508941' 00:28:23.280 killing process with pid 1508941 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1508941 00:28:23.280 00:53:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1508941 00:28:23.848 
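The ten near-identical heredoc blocks traced earlier come from gen_nvmf_target_json in nvmf/common.sh: each loop pass appends one attach-controller JSON object to a config array, and the array is later comma-joined via IFS and checked with jq before bdevperf consumes it. A minimal sketch of that pattern follows; the helper name gen_target_json_sketch and the bare [ ] array wrapper are illustrative simplifications, not the exact nvmf/common.sh implementation, which embeds the objects in the full bdevperf config skeleton.

#!/usr/bin/env bash
# Sketch of the traced config-builder pattern: one heredoc-built JSON object
# per subsystem, comma-joined through IFS, then validated with jq.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # "${config[*]}" joins the objects with the comma in IFS; wrapping the
    # result in [ ] yields valid JSON that jq can validate and pretty-print.
    jq . <<< "[ ${config[*]} ]"
}

Called as gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10 with TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420 exported, this emits the same ten-controller list printed in the trace above.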
00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:23.848 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:23.848 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:23.848 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.848 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:23.848 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.848 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.848 00:53:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:25.753 00:28:25.753 real 0m14.406s 00:28:25.753 user 0m30.843s 00:28:25.753 sys 0m5.657s 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:25.753 ************************************ 00:28:25.753 END TEST nvmf_shutdown_tc1 00:28:25.753 ************************************ 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.753 00:53:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.012 ************************************ 00:28:26.012 START TEST nvmf_shutdown_tc2 00:28:26.012 ************************************ 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:26.012 00:53:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:26.012 00:53:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:26.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:26.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:28:26.012 Found net devices under 0000:86:00.0: cvl_0_0 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:26.012 Found net devices under 0000:86:00.1: cvl_0_1 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.012 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:26.013 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:26.013 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.013 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.013 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:28:26.013 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.013 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:26.013 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:26.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:28:26.272 00:28:26.272 --- 10.0.0.2 ping statistics --- 00:28:26.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.272 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:26.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:28:26.272 00:28:26.272 --- 10.0.0.1 ping statistics --- 00:28:26.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.272 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1510538 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1510538 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1510538 ']' 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.272 00:53:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:26.272 [2024-07-13 00:53:37.694473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:26.272 [2024-07-13 00:53:37.694515] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.272 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.272 [2024-07-13 00:53:37.767536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:26.272 [2024-07-13 00:53:37.809151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.272 [2024-07-13 00:53:37.809191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.272 [2024-07-13 00:53:37.809198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.272 [2024-07-13 00:53:37.809204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.272 [2024-07-13 00:53:37.809208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
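For reference, the namespace plumbing that nvmftestinit performed in the trace above, consolidated into one runnable sketch (interface names cvl_0_0/cvl_0_1 and the cvl_0_0_ns_spdk namespace are taken verbatim from the traced commands; run as root):

# One NIC port moves into a private namespace for the target while the other
# stays in the root namespace for the initiator; each side gets an address on
# 10.0.0.0/24, port 4420 is opened, and both directions are checked with ping.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With that in place, nvmf_tgt is started inside cvl_0_0_ns_spdk (the ip netns exec prefix visible above), so it listens on 10.0.0.2:4420 while bdevperf connects from the root namespace.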
00:28:26.272 [2024-07-13 00:53:37.809328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.272 [2024-07-13 00:53:37.809442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.272 [2024-07-13 00:53:37.809549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.272 [2024-07-13 00:53:37.809551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.209 [2024-07-13 00:53:38.536334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.209 00:53:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.209 Malloc1 00:28:27.209 [2024-07-13 00:53:38.632348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.209 Malloc2 00:28:27.209 Malloc3 00:28:27.209 Malloc4 00:28:27.481 Malloc5 00:28:27.481 Malloc6 00:28:27.481 Malloc7 00:28:27.481 Malloc8 00:28:27.481 Malloc9 00:28:27.481 Malloc10 00:28:27.481 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.481 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:27.481 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:27.481 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.740 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1510802 00:28:27.740 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1510802 /var/tmp/bdevperf.sock 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1510802 ']' 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:27.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
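The launch being waited on here, reduced to the single underlying command (path abbreviated to $rootdir as elsewhere in the suite; the /dev/fd/63 that appears in the trace is bash's process substitution of the generated JSON):

# bdevperf reads its bdev list from the generated JSON and exposes an RPC
# socket (-r) that the test polls later: 64-deep 64 KiB verify I/O for 10 s.
"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 10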
00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 
00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.741 [2024-07-13 00:53:39.102477] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:27.741 [2024-07-13 00:53:39.102526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510802 ] 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.741 { 00:28:27.741 "params": { 00:28:27.741 "name": "Nvme$subsystem", 00:28:27.741 "trtype": "$TEST_TRANSPORT", 00:28:27.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.741 "adrfam": "ipv4", 00:28:27.741 "trsvcid": "$NVMF_PORT", 00:28:27.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.741 "hdgst": ${hdgst:-false}, 00:28:27.741 "ddgst": ${ddgst:-false} 00:28:27.741 }, 00:28:27.741 "method": "bdev_nvme_attach_controller" 00:28:27.741 } 00:28:27.741 EOF 00:28:27.741 )") 00:28:27.741 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.742 { 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme$subsystem", 00:28:27.742 "trtype": "$TEST_TRANSPORT", 00:28:27.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "$NVMF_PORT", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.742 "hdgst": ${hdgst:-false}, 00:28:27.742 "ddgst": ${ddgst:-false} 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 } 00:28:27.742 EOF 00:28:27.742 )") 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:27.742 { 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme$subsystem", 00:28:27.742 "trtype": "$TEST_TRANSPORT", 00:28:27.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "$NVMF_PORT", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.742 "hdgst": ${hdgst:-false}, 00:28:27.742 "ddgst": ${ddgst:-false} 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 } 00:28:27.742 EOF 00:28:27.742 )") 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:28:27.742 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:27.742 00:53:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme1", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme2", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme3", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme4", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme5", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme6", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme7", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme8", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:27.742 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme9", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 },{ 00:28:27.742 "params": { 00:28:27.742 "name": "Nvme10", 00:28:27.742 "trtype": "tcp", 00:28:27.742 "traddr": "10.0.0.2", 00:28:27.742 "adrfam": "ipv4", 00:28:27.742 "trsvcid": "4420", 00:28:27.742 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:27.742 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:27.742 "hdgst": false, 00:28:27.742 "ddgst": false 00:28:27.742 }, 00:28:27.742 "method": "bdev_nvme_attach_controller" 00:28:27.742 }' 00:28:27.742 [2024-07-13 00:53:39.173420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.742 [2024-07-13 00:53:39.213667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.169 Running I/O for 10 seconds... 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.169 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:29.461 00:53:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.461 00:53:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.461 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:29.461 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:29.461 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:29.720 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:29.720 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:29.720 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:29.720 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.720 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:29.720 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1510802 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1510802 ']' 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1510802 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510802 00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:29.980 00:53:41 
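
The shutdown.sh@57 through @69 trace above is the test's waitforio gate: it polls Nvme1n1's read counter over the bdevperf RPC socket until it crosses 100 I/Os, giving up after ten 0.25 s attempts. Condensed into one function (a sketch restating the traced lines; rpc_cmd is the harness wrapper around scripts/rpc.py):

  # Poll num_read_ops on a bdev until it reaches 100 or retries run out.
  waitforio() {
      local sock=$1 bdev=$2 ret=1 i count
      for ((i = 10; i != 0; i--)); do
          count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
          if [ "$count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }
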
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510802'
00:28:29.980 killing process with pid 1510802
00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1510802
00:28:29.980 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1510802
00:28:29.980 Received shutdown signal, test time was about 0.902878 seconds
00:28:29.980
00:28:29.980 Latency(us)
00:28:29.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:29.980 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme1n1 : 0.89 287.61 17.98 0.00 0.00 220102.12 25758.50 214274.23
00:28:29.980 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme2n1 : 0.89 294.09 18.38 0.00 0.00 210070.29 5670.29 216097.84
00:28:29.980 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme3n1 : 0.88 291.27 18.20 0.00 0.00 209389.30 24960.67 200597.15
00:28:29.980 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme4n1 : 0.88 290.06 18.13 0.00 0.00 206124.74 15614.66 217921.45
00:28:29.980 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme5n1 : 0.87 219.50 13.72 0.00 0.00 265706.63 18122.13 228863.11
00:28:29.980 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme6n1 : 0.90 291.23 18.20 0.00 0.00 197557.93 2877.89 213362.42
00:28:29.980 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme7n1 : 0.89 291.02 18.19 0.00 0.00 193044.07 5527.82 203332.56
00:28:29.980 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme8n1 : 0.90 284.59 17.79 0.00 0.00 194727.18 14588.88 207891.59
00:28:29.980 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme9n1 : 0.90 283.75 17.73 0.00 0.00 191455.72 16184.54 222480.47
00:28:29.980 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:29.980 Verification LBA range: start 0x0 length 0x400
00:28:29.980 Nvme10n1 : 0.87 219.98 13.75 0.00 0.00 240118.21 18578.03 242540.19
00:28:29.980 ===================================================================================================================
00:28:29.980 Total : 2753.12 172.07 0.00 0.00 210662.62 2877.89 242540.19
00:28:30.239 00:53:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1510538
00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:28:31.177 00:53:42
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:31.177 rmmod nvme_tcp 00:28:31.177 rmmod nvme_fabrics 00:28:31.177 rmmod nvme_keyring 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1510538 ']' 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1510538 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1510538 ']' 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1510538 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510538 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510538' 00:28:31.177 killing process with pid 1510538 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1510538 00:28:31.177 00:53:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1510538 00:28:31.743 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.743 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.743 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.743 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:28:31.743 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.744 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.744 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.744 00:53:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.648 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:33.648 00:28:33.648 real 0m7.807s 00:28:33.648 user 0m23.461s 00:28:33.648 sys 0m1.335s 00:28:33.648 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.648 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.648 ************************************ 00:28:33.648 END TEST nvmf_shutdown_tc2 00:28:33.648 ************************************ 00:28:33.648 00:53:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:33.648 00:53:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:33.648 00:53:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:33.648 00:53:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.649 00:53:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.908 ************************************ 00:28:33.908 START TEST nvmf_shutdown_tc3 00:28:33.908 ************************************ 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.908 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.909 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.909 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.909 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.909 00:53:45 
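
Both ports enumerate as Intel device 0x8086:0x159b, which the e810 array above maps to the E810 family (ice driver). The same identification by hand is simply (a sketch; lspci filters by vendor:device ID):

  # List PCI functions matching Intel's E810 device ID 0x159b.
  lspci -nn -d 8086:159b
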
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.909 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.909 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.169 00:53:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:34.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:34.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms
00:28:34.169
00:28:34.169 --- 10.0.0.2 ping statistics ---
00:28:34.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:34.169 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:34.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:34.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms
00:28:34.169
00:28:34.169 --- 10.0.0.1 ping statistics ---
00:28:34.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:34.169 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1512046
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1512046
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1512046 ']'
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:34.169 00:53:45
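
Collected from the nvmf/common.sh@244 through @268 trace above: the harness splits one NIC pair across network namespaces, cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and both directions are ping-verified. The same setup as one block (a sketch of the traced commands; the cvl_* names are specific to this rig):

  # Target port into its own namespace, initiator port left in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
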
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.169 00:53:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:34.169 [2024-07-13 00:53:45.591741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:34.169 [2024-07-13 00:53:45.591779] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.169 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.169 [2024-07-13 00:53:45.661760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.169 [2024-07-13 00:53:45.704117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.169 [2024-07-13 00:53:45.704160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.169 [2024-07-13 00:53:45.704167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.169 [2024-07-13 00:53:45.704173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.169 [2024-07-13 00:53:45.704178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.169 [2024-07-13 00:53:45.704235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.169 [2024-07-13 00:53:45.704318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.169 [2024-07-13 00:53:45.704439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.169 [2024-07-13 00:53:45.704441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.107 [2024-07-13 00:53:46.441337] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
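
The four "Reactor started on core 1" through "core 4" notices follow directly from nvmfappstart's -m 0x1E core mask: 0x1E is binary 11110, so bit 0 (core 0) is clear and bits 1 through 4 select cores 1-4. A quick expansion (sketch):

  # Expand the 0x1E core mask bit by bit; prints cores 1, 2, 3 and 4.
  mask=0x1E
  for core in {0..7}; do
      (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done
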
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.107 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.108 Malloc1 00:28:35.108 [2024-07-13 00:53:46.537340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.108 Malloc2 00:28:35.108 Malloc3 00:28:35.108 Malloc4 00:28:35.367 Malloc5 00:28:35.367 Malloc6 00:28:35.367 Malloc7 00:28:35.367 Malloc8 00:28:35.367 Malloc9 00:28:35.367 Malloc10 00:28:35.367 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
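
Each shutdown.sh@28 "cat" in the loop above appends one subsystem's worth of RPC lines to rpcs.txt, which is what yields the Malloc1 through Malloc10 bdevs and the ten cnode listeners on 10.0.0.2:4420 once the batch is replayed through rpc.py. Reconstructed from those visible results (a sketch, not the verbatim shutdown.sh body; the malloc size and block-size values here are illustrative):

  # One RPC block per subsystem, appended to rpcs.txt for batch replay.
  for i in {1..10}; do
      printf '%s\n' \
          "bdev_malloc_create -b Malloc$i 64 512" \
          "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
          "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
          "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
  done >> rpcs.txt
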
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.367 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:35.367 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:35.367 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1512323 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1512323 /var/tmp/bdevperf.sock 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1512323 ']' 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:35.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.628 { 00:28:35.628 "params": { 00:28:35.628 "name": "Nvme$subsystem", 00:28:35.628 "trtype": "$TEST_TRANSPORT", 00:28:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.628 "adrfam": "ipv4", 00:28:35.628 "trsvcid": "$NVMF_PORT", 00:28:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.628 "hdgst": ${hdgst:-false}, 00:28:35.628 "ddgst": ${ddgst:-false} 00:28:35.628 }, 00:28:35.628 "method": "bdev_nvme_attach_controller" 00:28:35.628 } 00:28:35.628 EOF 00:28:35.628 )") 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.628 { 00:28:35.628 "params": { 00:28:35.628 "name": "Nvme$subsystem", 00:28:35.628 "trtype": "$TEST_TRANSPORT", 00:28:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.628 "adrfam": "ipv4", 00:28:35.628 "trsvcid": "$NVMF_PORT", 00:28:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.628 "hdgst": ${hdgst:-false}, 00:28:35.628 "ddgst": ${ddgst:-false} 00:28:35.628 }, 00:28:35.628 "method": "bdev_nvme_attach_controller" 00:28:35.628 } 00:28:35.628 EOF 00:28:35.628 )") 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.628 { 00:28:35.628 "params": { 00:28:35.628 "name": "Nvme$subsystem", 00:28:35.628 "trtype": "$TEST_TRANSPORT", 00:28:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.628 "adrfam": "ipv4", 00:28:35.628 "trsvcid": "$NVMF_PORT", 00:28:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.628 "hdgst": ${hdgst:-false}, 00:28:35.628 "ddgst": ${ddgst:-false} 00:28:35.628 }, 00:28:35.628 "method": "bdev_nvme_attach_controller" 00:28:35.628 } 00:28:35.628 EOF 00:28:35.628 )") 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.628 { 00:28:35.628 "params": { 00:28:35.628 "name": "Nvme$subsystem", 00:28:35.628 "trtype": "$TEST_TRANSPORT", 00:28:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.628 "adrfam": "ipv4", 00:28:35.628 "trsvcid": "$NVMF_PORT", 00:28:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.628 "hdgst": ${hdgst:-false}, 00:28:35.628 "ddgst": ${ddgst:-false} 00:28:35.628 }, 00:28:35.628 "method": "bdev_nvme_attach_controller" 00:28:35.628 } 00:28:35.628 EOF 00:28:35.628 )") 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.628 { 00:28:35.628 "params": { 00:28:35.628 "name": "Nvme$subsystem", 00:28:35.628 "trtype": "$TEST_TRANSPORT", 00:28:35.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.628 "adrfam": "ipv4", 00:28:35.628 "trsvcid": "$NVMF_PORT", 00:28:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.628 "hdgst": ${hdgst:-false}, 00:28:35.628 "ddgst": ${ddgst:-false} 00:28:35.628 }, 00:28:35.628 "method": "bdev_nvme_attach_controller" 00:28:35.628 } 00:28:35.628 EOF 00:28:35.628 )") 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.628 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.629 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.629 { 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme$subsystem", 00:28:35.629 "trtype": "$TEST_TRANSPORT", 00:28:35.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "$NVMF_PORT", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.629 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:35.629 "hdgst": ${hdgst:-false}, 00:28:35.629 "ddgst": ${ddgst:-false} 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 } 00:28:35.629 EOF 00:28:35.629 )") 00:28:35.629 00:53:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.629 [2024-07-13 00:53:47.003400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:35.629 [2024-07-13 00:53:47.003448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512323 ] 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.629 { 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme$subsystem", 00:28:35.629 "trtype": "$TEST_TRANSPORT", 00:28:35.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "$NVMF_PORT", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.629 "hdgst": ${hdgst:-false}, 00:28:35.629 "ddgst": ${ddgst:-false} 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 } 00:28:35.629 EOF 00:28:35.629 )") 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.629 { 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme$subsystem", 00:28:35.629 "trtype": "$TEST_TRANSPORT", 00:28:35.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "$NVMF_PORT", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.629 "hdgst": ${hdgst:-false}, 00:28:35.629 "ddgst": ${ddgst:-false} 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 } 00:28:35.629 EOF 00:28:35.629 )") 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.629 { 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme$subsystem", 00:28:35.629 "trtype": "$TEST_TRANSPORT", 00:28:35.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "$NVMF_PORT", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.629 "hdgst": ${hdgst:-false}, 00:28:35.629 "ddgst": ${ddgst:-false} 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 } 00:28:35.629 EOF 00:28:35.629 )") 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.629 00:53:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.629 { 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme$subsystem", 00:28:35.629 "trtype": "$TEST_TRANSPORT", 00:28:35.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "$NVMF_PORT", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.629 "hdgst": ${hdgst:-false}, 00:28:35.629 "ddgst": ${ddgst:-false} 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 } 00:28:35.629 EOF 00:28:35.629 )") 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:35.629 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:35.629 00:53:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme1", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme2", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme3", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme4", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme5", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme6", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme7", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme8", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme9", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 },{ 00:28:35.629 "params": { 00:28:35.629 "name": "Nvme10", 00:28:35.629 "trtype": "tcp", 00:28:35.629 "traddr": "10.0.0.2", 00:28:35.629 "adrfam": "ipv4", 00:28:35.629 "trsvcid": "4420", 00:28:35.629 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:35.629 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:35.629 "hdgst": false, 00:28:35.629 "ddgst": false 00:28:35.629 }, 00:28:35.629 "method": "bdev_nvme_attach_controller" 00:28:35.629 }' 00:28:35.629 [2024-07-13 00:53:47.071448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.629 [2024-07-13 00:53:47.110710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.537 Running I/O for 10 seconds... 
00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=18 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 18 -ge 100 ']' 00:28:37.537 00:53:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:37.811 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:37.811 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:37.811 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:37.811 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:37.811 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.811 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=131 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1512046 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1512046 ']' 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1512046 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1512046 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1512046' 00:28:37.812 killing process with pid 1512046 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1512046 00:28:37.812 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1512046 00:28:37.812 [2024-07-13 00:53:49.246007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set 00:28:37.812 [2024-07-13 00:53:49.246142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
[2024-07-13 00:53:49.246007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824530 is same with the state(5) to be set
[... the same record repeats for tqpair=0x824530 with timestamps 00:53:49.246084 through 00:53:49.246480; duplicates omitted ...]
00:28:37.812 [2024-07-13 00:53:49.247661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826f10 is same with the state(5) to be set
[... the same record repeats for tqpair=0x826f10 with timestamps 00:53:49.247694 through 00:53:49.248082; duplicates omitted ...]
00:28:37.813 [2024-07-13 00:53:49.249267] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8249d0 is same with the state(5) to be set
[... the same record repeats for tqpair=0x8249d0 with timestamps 00:53:49.249278 through 00:53:49.249669; duplicates omitted ...]
00:28:37.814 [2024-07-13 00:53:49.250406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:37.814 [2024-07-13 00:53:49.250437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 (00:53:49.250447-250484); duplicates omitted ...]
00:28:37.814 [2024-07-13 00:53:49.250490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a779d0 is same with the state(5) to be set
[... a second group of four ASYNC EVENT REQUEST commands (cid:0-3) is aborted identically (00:53:49.250533-250585); duplicates omitted ...]
00:28:37.814 [2024-07-13 00:53:49.250598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c26b0 is same with the state(5) to be set
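In the aborted completions above, "(00/08)" is the NVMe status code type / status code pair: 0x00 is the generic command status set and 0x08 within it is "Command Aborted due to SQ Deletion". These admin commands were still outstanding when the shutdown tore the submission queues down, so completing them with this status is the expected teardown path here, not a failure. When triaging a run, a rough count of commands cut off this way can be pulled from the captured output, for example (the log filename is hypothetical):

    grep -c 'ABORTED - SQ DELETION' bdevperf.log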
00:28:37.814 [2024-07-13 00:53:49.251171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824e90 is same with the state(5) to be set
[... the same record repeats for tqpair=0x824e90 with timestamps 00:53:49.251195 through 00:53:49.251579; duplicates omitted ...]
00:28:37.815 [2024-07-13 00:53:49.252867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825330 is same with the state(5) to be set
[... the same record repeats for tqpair=0x825330 with timestamps 00:53:49.252891 through 00:53:49.253266; duplicates omitted ...]
00:28:37.815 [2024-07-13 00:53:49.254758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.815 [2024-07-13 00:53:49.254782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE/ABORTED - SQ DELETION pair repeats for cid:16 through cid:55, lba advancing by 128 blocks per command from 18432 to 23424 (00:53:49.254796-255390); duplicates omitted ...]
00:28:37.817 [2024-07-13 00:53:49.255398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 
00:53:49.255532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13
00:53:49.255620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13
00:53:49.255707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.817 [2024-07-13 00:53:49.255759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.817 [2024-07-13 00:53:49.255767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255804]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.817 [2024-07-13 00:53:49.255816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255835] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bc130 was disconnected and freed. reset controller. 00:28:37.818 [2024-07-13 00:53:49.255840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.255896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.255903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.255917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.255936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.255943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.255950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.255957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.255966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826110 is same with the state(5) to be set 00:28:37.818 [2024-07-13 00:53:49.255976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.255983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.255992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.255998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.818 [2024-07-13 00:53:49.256387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.818 [2024-07-13 00:53:49.256393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.819 [2024-07-13 00:53:49.256873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13
00:53:49.256878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.819 [2024-07-13 00:53:49.256880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256933] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bd5a0 was disconnected and freed. reset controller. 
00:28:37.819 [2024-07-13 00:53:49.256944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.819 [2024-07-13 00:53:49.256951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.256998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257004] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is 
same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257119] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.820 [2024-07-13 00:53:49.257130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257185] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.820 [2024-07-13 00:53:49.257188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13
00:53:49.257236] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:37.820 [2024-07-13 00:53:49.257256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8265d0 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.257999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 
00:53:49.258165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.820 [2024-07-13 00:53:49.258589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to 
be set 00:28:37.821 [2024-07-13 00:53:49.258879] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.258977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controlle[2024-07-13 00:53:49.259330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with tr 00:28:37.821 he state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controlle[2024-07-13 00:53:49.259424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with tr 00:28:37.821 he state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 00:28:37.821 [2024-07-13 00:53:49.259509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ba610 (9): Bad file descriptor 00:28:37.821 [2024-07-13 00:53:49.259522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set 
00:28:37.821 [2024-07-13 00:53:49.259557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2c30 (9): Bad file descriptor
00:28:37.821 [2024-07-13 00:53:49.259584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set
00:28:37.821 [2024-07-13 00:53:49.259652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(5) to be set
00:28:37.821 [2024-07-13 00:53:49.259682] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:37.821 [2024-07-13 00:53:49.260673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.821 [2024-07-13 00:53:49.260694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f2c30 with addr=10.0.0.2, port=4420
00:28:37.821 [2024-07-13 00:53:49.260703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c30 is same with the state(5) to be set
00:28:37.821 [2024-07-13 00:53:49.260910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.821 [2024-07-13 00:53:49.260920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ba610 with addr=10.0.0.2, port=4420
00:28:37.821 [2024-07-13 00:53:49.260928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ba610 is same with the state(5) to be set
00:28:37.821 [2024-07-13 00:53:49.260950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:37.821 [2024-07-13 00:53:49.260959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1-3 ...]
00:28:37.821 [2024-07-13 00:53:49.269634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a84a20 is same with the state(5) to be set
[... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs (qid:0 cid:0-3) repeated, each group followed by the "recv state ... is same with the state(5)" message, for tqpair=0x1a608c0 (00:53:49.269757), 0x1a8e210 (00:53:49.269865), 0x18d13a0 (00:53:49.269978) and 0x1a60b10 (00:53:49.270088) ...]
00:28:37.821 [2024-07-13 00:53:49.270111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a779d0 (9): Bad file descriptor
[... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs (qid:0 cid:0-3) repeated for tqpair=0x14aed70 (00:53:49.270230) ...]
00:28:37.822 [2024-07-13 00:53:49.270250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c26b0 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270344] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:37.822 [2024-07-13 00:53:49.270420] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:37.822 [2024-07-13 00:53:49.270561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2c30 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ba610 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270603] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:37.822 [2024-07-13 00:53:49.270618] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:37.822 [2024-07-13 00:53:49.270726] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:37.822 [2024-07-13 00:53:49.270799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:28:37.822 [2024-07-13 00:53:49.270810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:28:37.822 [2024-07-13 00:53:49.270820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:28:37.822 [2024-07-13 00:53:49.270836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:28:37.822 [2024-07-13 00:53:49.270844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:28:37.822 [2024-07-13 00:53:49.270853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:28:37.822 [2024-07-13 00:53:49.270871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a84a20 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a608c0 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8e210 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d13a0 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a60b10 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.270977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aed70 (9): Bad file descriptor
00:28:37.822 [2024-07-13 00:53:49.271115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:37.822 [2024-07-13 00:53:49.271128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:37.822 [2024-07-13 00:53:49.271188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.822 [2024-07-13 00:53:49.271202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 / ABORTED - SQ DELETION pair repeated for cid:1-63, lba:16512-24448 in steps of 128 (00:53:49.271222 through 00:53:49.272610) ...]
00:28:37.823 [2024-07-13 00:53:49.272620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196b0c0 is same with the state(5) to be set
00:28:37.823 [2024-07-13 00:53:49.274028] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:37.823 [2024-07-13 00:53:49.274094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.823 [2024-07-13 00:53:49.274106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ sqid:1 / ABORTED - SQ DELETION pair repeated for cid:1-54, lba:8320-15104 in steps of 128 (00:53:49.274120 through 00:53:49.275212) ...]
00:28:37.825 [2024-07-13 00:53:49.275223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.275398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.275408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32670 is same with the state(5) to be set 00:28:37.825 [2024-07-13 00:53:49.280050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.825 [2024-07-13 00:53:49.280079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
00:28:37.825 [2024-07-13 00:53:49.280079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:37.825 [2024-07-13 00:53:49.280373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.825 [2024-07-13 00:53:49.280391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c26b0 with addr=10.0.0.2, port=4420
00:28:37.825 [2024-07-13 00:53:49.280401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c26b0 is same with the state(5) to be set
00:28:37.825 [2024-07-13 00:53:49.280501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.825 [2024-07-13 00:53:49.280514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a779d0 with addr=10.0.0.2, port=4420
00:28:37.825 [2024-07-13 00:53:49.280523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a779d0 is same with the state(5) to be set
00:28:37.825 [2024-07-13 00:53:49.281101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c26b0 (9): Bad file descriptor
00:28:37.825 [2024-07-13 00:53:49.281118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a779d0 (9): Bad file descriptor
00:28:37.825 [2024-07-13 00:53:49.281262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:37.825 [2024-07-13 00:53:49.281275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:37.825 [2024-07-13 00:53:49.281284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:37.825 [2024-07-13 00:53:49.281298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:28:37.825 [2024-07-13 00:53:49.281311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:28:37.825 [2024-07-13 00:53:49.281319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:28:37.825 [2024-07-13 00:53:49.281381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 
00:53:49.281583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.825 [2024-07-13 00:53:49.281691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.825 [2024-07-13 00:53:49.281702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281778] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.281991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.281999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.826 [2024-07-13 00:53:49.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.826 [2024-07-13 00:53:49.282542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.282551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.282561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.282570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.282580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.282589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.282601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.282610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.282621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.282629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.282638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196c340 is same with the state(5) to be set 00:28:37.827 [2024-07-13 00:53:49.283935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.283951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.283965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.283974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.283985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.283994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284053] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.827 [2024-07-13 00:53:49.284659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.827 [2024-07-13 00:53:49.284669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:37.828 [2024-07-13 00:53:49.284843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.284987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.284997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 
00:53:49.285035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.285179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.285188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196d7d0 is same with the state(5) to be set 00:28:37.828 [2024-07-13 00:53:49.286481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.286498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.828 [2024-07-13 00:53:49.286511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.828 [2024-07-13 00:53:49.286519] 
00:28:37.828 [2024-07-13 00:53:49.286481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.828 [2024-07-13 00:53:49.286498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.829 [... same READ / ABORTED - SQ DELETION pair repeated for cid:1-63, lba:16512-24448 in steps of 128 ...]
00:28:37.830 [2024-07-13 00:53:49.287742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196ecf0 is same with the state(5) to be set
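The trailing fields SPDK prints on every completion (sqhd from completion dword 2; p, the sct/sc pair, m, and dnr from the upper half of completion dword 3) are the raw NVMe completion-queue-entry bits. A minimal sketch of that decode, assuming the NVMe 1.x status-field layout; the helper name is illustrative:

# Decode the 16-bit phase+status half of NVMe completion dword 3 into the
# fields SPDK echoes above (p, sc, sct, m, dnr).
def decode_status(status16: int) -> dict:
    return {
        "p":   status16 & 0x1,          # phase tag (bit 0)
        "sc":  (status16 >> 1) & 0xFF,  # status code (bits 8:1)
        "sct": (status16 >> 9) & 0x7,   # status code type (bits 11:9)
        "m":   (status16 >> 14) & 0x1,  # more (bit 14)
        "dnr": (status16 >> 15) & 0x1,  # do not retry (bit 15)
    }

# "ABORTED - SQ DELETION (00/08)" corresponds to sct=0x0, sc=0x08, and this
# log shows p:0 m:0 dnr:0 throughout:
assert decode_status(0x08 << 1) == {"p": 0, "sc": 8, "sct": 0, "m": 0, "dnr": 0}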
00:28:37.830 [2024-07-13 00:53:49.288938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.830 [2024-07-13 00:53:49.288953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.830 [... same READ / ABORTED - SQ DELETION pair repeated for cid:1-63, lba:16512-24448 in steps of 128 ...]
00:28:37.831 [2024-07-13 00:53:49.290025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18bea10 is same with the state(5) to be set
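Each sweep of cid 0-63 ends with the same nvme_tcp *ERROR* line for a different tqpair pointer; nvme_tcp_qpair_set_recv_state logs it when a qpair is asked to enter the recv state it is already in. A minimal sketch to tally those lines per qpair, assuming only the message format shown in this log:

# Tally redundant recv-state transitions per TCP qpair pointer from stdin.
import re
import sys
from collections import Counter

ERR = re.compile(r"nvme_tcp_qpair_set_recv_state: \*ERROR\*: The recv state "
                 r"of tqpair=(0x[0-9a-f]+) is same with the state\((\d+)\) to be set")

counts = Counter((m.group(1), m.group(2))
                 for m in map(ERR.search, sys.stdin) if m)
for (qpair, state), n in counts.most_common():
    print(f"tqpair={qpair}: {n} redundant set(s) of recv state {state}")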
00:28:37.831 [2024-07-13 00:53:49.291159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.831 [2024-07-13 00:53:49.291174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:37.832 [... same READ / ABORTED - SQ DELETION pair repeated for cid:1-62, lba:16512-24320 in steps of 128 ...]
00:28:37.833 [2024-07-13 00:53:49.292231] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.292239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.292247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2ff00 is same with the state(5) to be set 00:28:37.833 [2024-07-13 00:53:49.293361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.833 [2024-07-13 00:53:49.293746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.833 [2024-07-13 00:53:49.293755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.293988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.293995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:37.834 [2024-07-13 00:53:49.294020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 
00:53:49.294189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294359] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.834 [2024-07-13 00:53:49.294433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.834 [2024-07-13 00:53:49.294441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a311b0 is same with the state(5) to be set 00:28:37.834 [2024-07-13 00:53:49.300929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:37.834 [2024-07-13 00:53:49.300955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:37.834 [2024-07-13 00:53:49.300970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:37.834 [2024-07-13 00:53:49.300977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:37.834 [2024-07-13 00:53:49.300984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:37.835 [2024-07-13 00:53:49.300993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:37.835 [2024-07-13 00:53:49.301060] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:37.835 [2024-07-13 00:53:49.301073] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:37.835 [2024-07-13 00:53:49.301084] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:37.835 [2024-07-13 00:53:49.301093] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
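Each NOTICE pair in the dump above is SPDK reporting one outstanding READ together with the ABORTED - SQ DELETION (00/08) completion it was given when the submission queue was torn down during the controller reset; the recv-state errors then mark the TCP qpair state transition. A minimal sketch for summarizing such a capture, assuming the console output was saved to a file named console.log (the file name and the regular expressions are assumptions derived from the line format shown above):

# Tally "ABORTED - SQ DELETION" completions and recv-state errors per tqpair
# from a saved copy of this console log. "console.log" is an assumed file name.
import re
from collections import Counter

abort_re = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
state_re = re.compile(r"recv state of tqpair=(0x[0-9a-f]+)")

aborts = 0
qpairs = Counter()
with open("console.log") as log:
    for line in log:
        aborts += len(abort_re.findall(line))  # one aborted command per match
        for addr in state_re.findall(line):
            qpairs[addr] += 1                  # state-transition errors per qpair

print(f"aborted completions: {aborts}")
print("recv-state errors per tqpair:", dict(qpairs))

Using findall rather than a single match per line matters here, because a raw capture of this log can carry several entries on one physical line.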
00:28:37.835 [2024-07-13 00:53:49.301155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:37.835 [2024-07-13 00:53:49.301164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:37.835 [2024-07-13 00:53:49.301172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:37.835 task offset: 18304 on job bdev=Nvme5n1 fails
00:28:37.835 Latency(us)
00:28:37.835 Device Information (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; all ended with error) : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:28:37.835 Nvme1n1  : 0.64  199.00  12.44   99.50  0.00  211480.34  22909.11  206067.98
00:28:37.835 Nvme2n1  : 0.65  195.96  12.25   97.98  0.00  209546.61  27582.11  208803.39
00:28:37.835 Nvme3n1  : 0.66  195.20  12.20   97.60  0.00  205102.45  15158.76  214274.23
00:28:37.835 Nvme4n1  : 0.66  194.44  12.15   97.22  0.00  200665.41  23137.06  206067.98
00:28:37.835 Nvme5n1  : 0.63  203.85  12.74  101.93  0.00  185307.57   3590.23  217009.64
00:28:37.835 Nvme6n1  : 0.63  203.56  12.72  101.78  0.00  180392.00   4160.11  204244.37
00:28:37.835 Nvme7n1  : 0.66  193.78  12.11   96.89  0.00  185659.21  15272.74  214274.23
00:28:37.835 Nvme8n1  : 0.66  193.14  12.07   96.57  0.00  181169.64  24048.86  193302.71
00:28:37.835 Nvme9n1  : 0.66   96.25   6.02   96.25  0.00  265098.91  19147.91  248011.02
00:28:37.835 Nvme10n1 : 0.65   99.07   6.19   99.07  0.00  247637.93  19603.81  237069.36
00:28:37.835 ===================================================================================================================
00:28:37.835 Total    :  1774.25  110.89  984.79  0.00  203694.41   3590.23  248011.02
00:28:37.835 [2024-07-13 00:53:49.327296] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:37.835 [2024-07-13 00:53:49.327336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:37.835 [2024-07-13 00:53:49.327526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.835 [2024-07-13 00:53:49.327542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ba610 with addr=10.0.0.2, port=4420
00:28:37.835 [2024-07-13 00:53:49.327552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ba610 is same with the state(5) to be set
00:28:37.835 (the same connect() failed / sock connection error / recv state sequence repeats for tqpair=0x18f2c30, 0x1a8e210, 0x14aed70, 0x18d13a0, 0x1a60b10, 0x1a608c0, and 0x1a84a20, all against addr=10.0.0.2, port=4420)
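The bdevperf summary above is internally consistent: with 64 KiB I/Os, the MiB/s column is IOPS x 65536 / 2^20 (for Nvme1n1, 199.00 IOPS x 64 KiB is about 12.44 MiB/s), and the Total row is the per-device sum. A small check, with the values copied from the table:

# Cross-check the bdevperf summary: MiB/s = IOPS * IO size (65536 B),
# and the Total row is the per-device sum. Values copied from the table above.
IO_SIZE = 65536  # bytes, from "IO size: 65536"

iops = [199.00, 195.96, 195.20, 194.44, 203.85, 203.56, 193.78, 193.14, 96.25, 99.07]
mibs = [12.44, 12.25, 12.20, 12.15, 12.74, 12.72, 12.11, 12.07, 6.02, 6.19]

for i, (r, m) in enumerate(zip(iops, mibs), start=1):
    assert abs(r * IO_SIZE / 2**20 - m) < 0.01, f"Nvme{i}n1 throughput mismatch"

assert abs(sum(iops) - 1774.25) < 0.01  # Total IOPS row
assert abs(sum(mibs) - 110.89) < 0.01   # Total MiB/s row
print("table totals and per-device throughput are self-consistent")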
00:28:37.835 [2024-07-13 00:53:49.330420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ba610 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2c30 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8e210 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14aed70 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330478 through .330534] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (six occurrences)
00:28:37.835 [2024-07-13 00:53:49.330826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:37.835 [2024-07-13 00:53:49.330840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:37.835 [2024-07-13 00:53:49.330878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d13a0 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a60b10 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a608c0 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a84a20 (9): Bad file descriptor
00:28:37.835 [2024-07-13 00:53:49.330913 through .330995] nvme_ctrlr.c:4164/1818/1106: *ERROR*: [nqn.2016-06.io.spdk:cnode6], [cnode5], [cnode2], [cnode3]: Ctrlr is in error state; controller reinitialization failed; in failed state.
00:28:37.835 [2024-07-13 00:53:49.331067 through .331087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (four occurrences)
00:28:37.835 [2024-07-13 00:53:49.331340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.835 [2024-07-13 00:53:49.331354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a779d0 with addr=10.0.0.2, port=4420
00:28:37.836 [2024-07-13 00:53:49.331361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a779d0 is same with the state(5) to be set
00:28:37.836 [2024-07-13 00:53:49.331556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.836 [2024-07-13 00:53:49.331567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c26b0 with addr=10.0.0.2, port=4420
00:28:37.836 [2024-07-13 00:53:49.331573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c26b0 is same with the state(5) to be set
00:28:37.836 [2024-07-13 00:53:49.331580 through .331656] nvme_ctrlr.c:4164/1818/1106: *ERROR*: [nqn.2016-06.io.spdk:cnode4], [cnode7], [cnode8], [cnode9]: Ctrlr is in error state; controller reinitialization failed; in failed state.
00:28:37.836 [2024-07-13 00:53:49.331683 through .331702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (four occurrences)
00:28:37.836 [2024-07-13 00:53:49.331710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a779d0 (9): Bad file descriptor
00:28:37.836 [2024-07-13 00:53:49.331719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c26b0 (9): Bad file descriptor
00:28:37.836 [2024-07-13 00:53:49.331744 through .331781] nvme_ctrlr.c:4164/1818/1106: *ERROR*: [nqn.2016-06.io.spdk:cnode10], [cnode1]: Ctrlr is in error state; controller reinitialization failed; in failed state.
00:28:37.836 [2024-07-13 00:53:49.331805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:37.836 [2024-07-13 00:53:49.331812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
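errno = 111 in the connect() failures above is ECONNREFUSED on Linux: the target application has already stopped (spdk_app_stop), so every reconnect attempt to 10.0.0.2:4420 is refused and each controller ends up in the failed state. This can be confirmed against the standard errno table:

# errno 111 in the connect() failures above is ECONNREFUSED on Linux
# (the numeric value is platform-specific; this check assumes Linux).
import errno

assert errno.errorcode[111] == "ECONNREFUSED"
print(111, errno.errorcode[111])  # -> 111 ECONNREFUSED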
00:28:38.406 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:28:38.406 00:53:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1512323
00:28:39.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1512323) - No such process
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:39.344 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:39.345 rmmod nvme_tcp
00:28:39.345 rmmod nvme_fabrics
00:28:39.345 rmmod nvme_keyring
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:39.345 00:53:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:41.251 00:53:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:41.251
00:28:41.251 real    0m7.573s
00:28:41.251 user    0m18.223s
00:28:41.251 sys     0m1.219s
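The kill -9 at shutdown.sh line 142 targets a PID that already exited, and the trace shows the script treating "No such process" as success (the following # true step). A sketch of the same idempotent-kill pattern in Python, using the PID from the trace purely as an illustration:

# Idempotent kill, mirroring "kill -9 $nvmfpid || true" in the teardown above:
# signal the process and treat "No such process" as success.
import os
import signal

def kill_if_running(pid: int) -> None:
    try:
        os.kill(pid, signal.SIGKILL)
    except ProcessLookupError:
        pass  # already gone: the outcome the teardown wants either way

kill_if_running(1512323)  # PID taken from the trace; long dead on any real host
print("teardown continues whether or not the target was still alive")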
00:53:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:41.251 00:53:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:41.251 ************************************
00:28:41.251 END TEST nvmf_shutdown_tc3
00:28:41.251 ************************************
00:28:41.510 00:53:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:28:41.510 00:53:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:28:41.510
00:28:41.510 real    0m30.129s
00:28:41.510 user    1m12.671s
00:28:41.510 sys     0m8.436s
00:28:41.510 00:53:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:41.510 00:53:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:41.510 ************************************
00:28:41.510 END TEST nvmf_shutdown
00:28:41.510 ************************************
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:41.510 00:53:52 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:41.510 00:53:52 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:41.510 00:53:52 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:28:41.510 00:53:52 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:41.510 00:53:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:41.510 ************************************
00:28:41.510 START TEST nvmf_multicontroller
00:28:41.510 ************************************
00:28:41.510 00:53:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:41.510 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 through @6 -- # PATH built by repeatedly prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, and /opt/go/1.21.1/bin ahead of the system directories (/usr/local/bin through /var/lib/snapd/snap/bin), then exported and echoed (the full values repeat the same toolchain prefixes and are otherwise identical)
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.510 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:41.769 00:53:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:41.769 00:53:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.044 00:53:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:47.044 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:47.044 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:47.044 Found net devices under 0000:86:00.0: cvl_0_0 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:47.044 Found net devices under 0000:86:00.1: cvl_0_1 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.044 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:47.045 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.045 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.045 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:47.045 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.045 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.045 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.304 00:53:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:47.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:28:47.304 00:28:47.304 --- 10.0.0.2 ping statistics --- 00:28:47.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.304 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:47.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:47.304 00:28:47.304 --- 10.0.0.1 ping statistics --- 00:28:47.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.304 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:47.304 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1516365 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1516365 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1516365 ']' 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:47.563 00:53:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.563 [2024-07-13 00:53:58.923590] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:47.563 [2024-07-13 00:53:58.923639] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.563 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.563 [2024-07-13 00:53:58.997894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:47.563 [2024-07-13 00:53:59.038180] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.563 [2024-07-13 00:53:59.038218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.563 [2024-07-13 00:53:59.038229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.563 [2024-07-13 00:53:59.038235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.563 [2024-07-13 00:53:59.038240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:47.563 [2024-07-13 00:53:59.038374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.563 [2024-07-13 00:53:59.038479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.563 [2024-07-13 00:53:59.038480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 [2024-07-13 00:53:59.768727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 Malloc0 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 [2024-07-13 00:53:59.827989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 
00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 [2024-07-13 00:53:59.835938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 Malloc1 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1516606 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1516606 /var/tmp/bdevperf.sock 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1516606 ']' 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:48.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:48.499 00:53:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.759 NVMe0n1 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.759 1 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.759 request: 00:28:48.759 { 00:28:48.759 "name": "NVMe0", 00:28:48.759 "trtype": "tcp", 00:28:48.759 "traddr": "10.0.0.2", 00:28:48.759 "adrfam": "ipv4", 00:28:48.759 "trsvcid": "4420", 00:28:48.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.759 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:48.759 "hostaddr": "10.0.0.2", 00:28:48.759 "hostsvcid": "60000", 00:28:48.759 "prchk_reftag": false, 00:28:48.759 "prchk_guard": false, 00:28:48.759 "hdgst": false, 00:28:48.759 "ddgst": false, 00:28:48.759 "method": "bdev_nvme_attach_controller", 00:28:48.759 "req_id": 1 00:28:48.759 } 00:28:48.759 Got JSON-RPC error response 00:28:48.759 response: 00:28:48.759 { 00:28:48.759 "code": -114, 00:28:48.759 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:48.759 } 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:48.759 request: 00:28:48.759 { 00:28:48.759 "name": "NVMe0", 00:28:48.759 "trtype": "tcp", 00:28:48.759 "traddr": "10.0.0.2", 00:28:48.759 "adrfam": "ipv4", 00:28:48.759 "trsvcid": "4420", 00:28:48.759 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:48.759 "hostaddr": "10.0.0.2", 00:28:48.759 "hostsvcid": "60000", 00:28:48.759 "prchk_reftag": false, 00:28:48.759 "prchk_guard": false, 00:28:48.759 
"hdgst": false, 00:28:48.759 "ddgst": false, 00:28:48.759 "method": "bdev_nvme_attach_controller", 00:28:48.759 "req_id": 1 00:28:48.759 } 00:28:48.759 Got JSON-RPC error response 00:28:48.759 response: 00:28:48.759 { 00:28:48.759 "code": -114, 00:28:48.759 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:48.759 } 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.759 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 request: 00:28:49.019 { 00:28:49.019 "name": "NVMe0", 00:28:49.019 "trtype": "tcp", 00:28:49.019 "traddr": "10.0.0.2", 00:28:49.019 "adrfam": "ipv4", 00:28:49.019 "trsvcid": "4420", 00:28:49.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.019 "hostaddr": "10.0.0.2", 00:28:49.019 "hostsvcid": "60000", 00:28:49.019 "prchk_reftag": false, 00:28:49.019 "prchk_guard": false, 00:28:49.019 "hdgst": false, 00:28:49.019 "ddgst": false, 00:28:49.019 "multipath": "disable", 00:28:49.019 "method": "bdev_nvme_attach_controller", 00:28:49.019 "req_id": 1 00:28:49.019 } 00:28:49.019 Got JSON-RPC error response 00:28:49.019 response: 00:28:49.019 { 00:28:49.019 "code": -114, 00:28:49.019 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:49.019 } 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:49.019 00:54:00 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 request: 00:28:49.019 { 00:28:49.019 "name": "NVMe0", 00:28:49.019 "trtype": "tcp", 00:28:49.019 "traddr": "10.0.0.2", 00:28:49.019 "adrfam": "ipv4", 00:28:49.019 "trsvcid": "4420", 00:28:49.019 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.019 "hostaddr": "10.0.0.2", 00:28:49.019 "hostsvcid": "60000", 00:28:49.019 "prchk_reftag": false, 00:28:49.019 "prchk_guard": false, 00:28:49.019 "hdgst": false, 00:28:49.019 "ddgst": false, 00:28:49.019 "multipath": "failover", 00:28:49.019 "method": "bdev_nvme_attach_controller", 00:28:49.019 "req_id": 1 00:28:49.019 } 00:28:49.019 Got JSON-RPC error response 00:28:49.019 response: 00:28:49.019 { 00:28:49.019 "code": -114, 00:28:49.019 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:49.019 } 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.019 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:49.278 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:49.278 00:54:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:50.214 0 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1516606 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1516606 ']' 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1516606 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.214 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1516606 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1516606' 00:28:50.473 killing process with pid 1516606 00:28:50.473 00:54:01 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1516606 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1516606 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:50.473 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:50.473 [2024-07-13 00:53:59.939667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:50.473 [2024-07-13 00:53:59.939721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516606 ] 00:28:50.473 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.473 [2024-07-13 00:54:00.009152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.473 [2024-07-13 00:54:00.051573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.473 [2024-07-13 00:54:00.580695] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 42f0b812-a891-4fc5-bc55-b5aba26615af already exists 00:28:50.473 [2024-07-13 00:54:00.580725] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:42f0b812-a891-4fc5-bc55-b5aba26615af alias for bdev NVMe1n1 00:28:50.473 [2024-07-13 00:54:00.580732] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:50.473 Running I/O for 1 seconds... 
00:28:50.473 00:28:50.473 Latency(us) 00:28:50.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.473 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:50.473 NVMe0n1 : 1.01 23235.11 90.76 0.00 0.00 5490.80 1552.92 6525.11 00:28:50.473 =================================================================================================================== 00:28:50.473 Total : 23235.11 90.76 0.00 0.00 5490.80 1552.92 6525.11 00:28:50.473 Received shutdown signal, test time was about 1.000000 seconds 00:28:50.473 00:28:50.473 Latency(us) 00:28:50.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.473 =================================================================================================================== 00:28:50.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.473 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:50.473 00:54:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:50.474 00:54:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:50.474 00:54:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:50.474 rmmod nvme_tcp 00:28:50.474 rmmod nvme_fabrics 00:28:50.732 rmmod nvme_keyring 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1516365 ']' 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1516365 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1516365 ']' 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1516365 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1516365 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1516365' 00:28:50.732 killing process with pid 1516365 00:28:50.732 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1516365 00:28:50.732 00:54:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1516365 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.991 00:54:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.949 00:54:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:52.949 00:28:52.949 real 0m11.433s 00:28:52.949 user 0m13.518s 00:28:52.949 sys 0m5.205s 00:28:52.949 00:54:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:52.949 00:54:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:52.949 ************************************ 00:28:52.949 END TEST nvmf_multicontroller 00:28:52.949 ************************************ 00:28:52.949 00:54:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:52.949 00:54:04 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:52.949 00:54:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:52.949 00:54:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.949 00:54:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:52.949 ************************************ 00:28:52.949 START TEST nvmf_aer 00:28:52.949 ************************************ 00:28:52.949 00:54:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:53.210 * Looking for test storage... 
00:28:53.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:53.210 00:54:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:59.784 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.784 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:28:59.784 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:59.785 Found net devices under 0000:86:00.0: cvl_0_0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:59.785 Found net devices under 0000:86:00.1: cvl_0_1 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.785 
00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:28:59.785 00:28:59.785 --- 10.0.0.2 ping statistics --- 00:28:59.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.785 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:28:59.785 00:28:59.785 --- 10.0.0.1 ping statistics --- 00:28:59.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.785 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1520888 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1520888 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1520888 ']' 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 [2024-07-13 00:54:10.418832] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:59.785 [2024-07-13 00:54:10.418878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.785 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.785 [2024-07-13 00:54:10.490555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.785 [2024-07-13 00:54:10.532352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.785 [2024-07-13 00:54:10.532391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
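The nvmf_tcp_init steps traced above carve the two-port E810 NIC into a self-contained test topology: one port (cvl_0_0) moves into a network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic flows. A minimal sketch of the equivalent manual setup, using the interface names, addresses, and port from this run:

    ip netns add cvl_0_0_ns_spdk                 # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> root ns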
00:28:59.785 [2024-07-13 00:54:10.532398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.785 [2024-07-13 00:54:10.532404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.785 [2024-07-13 00:54:10.532409] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.785 [2024-07-13 00:54:10.532482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.785 [2024-07-13 00:54:10.532594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.785 [2024-07-13 00:54:10.532700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.785 [2024-07-13 00:54:10.532702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 [2024-07-13 00:54:10.675245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 Malloc0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 [2024-07-13 00:54:10.726882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.785 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.785 [ 00:28:59.785 { 00:28:59.785 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:59.785 "subtype": "Discovery", 00:28:59.785 "listen_addresses": [], 00:28:59.785 "allow_any_host": true, 00:28:59.785 "hosts": [] 00:28:59.785 }, 00:28:59.785 { 00:28:59.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.785 "subtype": "NVMe", 00:28:59.785 "listen_addresses": [ 00:28:59.786 { 00:28:59.786 "trtype": "TCP", 00:28:59.786 "adrfam": "IPv4", 00:28:59.786 "traddr": "10.0.0.2", 00:28:59.786 "trsvcid": "4420" 00:28:59.786 } 00:28:59.786 ], 00:28:59.786 "allow_any_host": true, 00:28:59.786 "hosts": [], 00:28:59.786 "serial_number": "SPDK00000000000001", 00:28:59.786 "model_number": "SPDK bdev Controller", 00:28:59.786 "max_namespaces": 2, 00:28:59.786 "min_cntlid": 1, 00:28:59.786 "max_cntlid": 65519, 00:28:59.786 "namespaces": [ 00:28:59.786 { 00:28:59.786 "nsid": 1, 00:28:59.786 "bdev_name": "Malloc0", 00:28:59.786 "name": "Malloc0", 00:28:59.786 "nguid": "1C790F928CBD4BD79A358AF5037B7D35", 00:28:59.786 "uuid": "1c790f92-8cbd-4bd7-9a35-8af5037b7d35" 00:28:59.786 } 00:28:59.786 ] 00:28:59.786 } 00:28:59.786 ] 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1520919 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:59.786 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 Malloc1 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.786 00:54:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 Asynchronous Event Request test 00:28:59.786 Attaching to 10.0.0.2 00:28:59.786 Attached to 10.0.0.2 00:28:59.786 Registering asynchronous event callbacks... 00:28:59.786 Starting namespace attribute notice tests for all controllers... 00:28:59.786 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:59.786 aer_cb - Changed Namespace 00:28:59.786 Cleaning up... 00:28:59.786 [ 00:28:59.786 { 00:28:59.786 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:59.786 "subtype": "Discovery", 00:28:59.786 "listen_addresses": [], 00:28:59.786 "allow_any_host": true, 00:28:59.786 "hosts": [] 00:28:59.786 }, 00:28:59.786 { 00:28:59.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.786 "subtype": "NVMe", 00:28:59.786 "listen_addresses": [ 00:28:59.786 { 00:28:59.786 "trtype": "TCP", 00:28:59.786 "adrfam": "IPv4", 00:28:59.786 "traddr": "10.0.0.2", 00:28:59.786 "trsvcid": "4420" 00:28:59.786 } 00:28:59.786 ], 00:28:59.786 "allow_any_host": true, 00:28:59.786 "hosts": [], 00:28:59.786 "serial_number": "SPDK00000000000001", 00:28:59.786 "model_number": "SPDK bdev Controller", 00:28:59.786 "max_namespaces": 2, 00:28:59.786 "min_cntlid": 1, 00:28:59.786 "max_cntlid": 65519, 00:28:59.786 "namespaces": [ 00:28:59.786 { 00:28:59.786 "nsid": 1, 00:28:59.786 "bdev_name": "Malloc0", 00:28:59.786 "name": "Malloc0", 00:28:59.786 "nguid": "1C790F928CBD4BD79A358AF5037B7D35", 00:28:59.786 "uuid": "1c790f92-8cbd-4bd7-9a35-8af5037b7d35" 00:28:59.786 }, 00:28:59.786 { 00:28:59.786 "nsid": 2, 00:28:59.786 "bdev_name": "Malloc1", 00:28:59.786 "name": "Malloc1", 00:28:59.786 "nguid": "DAC8241DE8864330B8D5CB213B125BF9", 00:28:59.786 "uuid": "dac8241d-e886-4330-b8d5-cb213b125bf9" 00:28:59.786 } 00:28:59.786 ] 00:28:59.786 } 00:28:59.786 ] 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1520919 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.786 rmmod nvme_tcp 00:28:59.786 rmmod nvme_fabrics 00:28:59.786 rmmod nvme_keyring 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1520888 ']' 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1520888 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1520888 ']' 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1520888 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1520888 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1520888' 00:28:59.786 killing process with pid 1520888 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1520888 00:28:59.786 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1520888 00:29:00.045 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:00.045 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:00.045 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:00.045 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:00.046 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:00.046 00:54:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.046 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:29:00.046 00:54:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.951 00:54:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:01.951 00:29:01.951 real 0m8.996s 00:29:01.951 user 0m4.941s 00:29:01.951 sys 0m4.725s 00:29:01.951 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.951 00:54:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:01.951 ************************************ 00:29:01.951 END TEST nvmf_aer 00:29:01.951 ************************************ 00:29:01.951 00:54:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:01.951 00:54:13 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:01.951 00:54:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:01.951 00:54:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.951 00:54:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.210 ************************************ 00:29:02.210 START TEST nvmf_async_init 00:29:02.210 ************************************ 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:02.210 * Looking for test storage... 00:29:02.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.210 00:54:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cea216ace3cd44abbeef4d2a36dfc282 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:02.211 00:54:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:08.778 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:08.778 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:08.778 Found net devices under 0000:86:00.0: cvl_0_0 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.778 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:08.779 Found net devices under 0000:86:00.1: cvl_0_1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:08.779 
00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:08.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:29:08.779 00:29:08.779 --- 10.0.0.2 ping statistics --- 00:29:08.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.779 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:08.779 00:29:08.779 --- 10.0.0.1 ping statistics --- 00:29:08.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.779 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1524434 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 
1524434 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1524434 ']' 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 [2024-07-13 00:54:19.466630] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:08.779 [2024-07-13 00:54:19.466671] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.779 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.779 [2024-07-13 00:54:19.536765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.779 [2024-07-13 00:54:19.576641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.779 [2024-07-13 00:54:19.576677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.779 [2024-07-13 00:54:19.576684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.779 [2024-07-13 00:54:19.576690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.779 [2024-07-13 00:54:19.576695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:08.779 [2024-07-13 00:54:19.576712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 [2024-07-13 00:54:19.704738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 null0 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cea216ace3cd44abbeef4d2a36dfc282 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 [2024-07-13 00:54:19.748935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 nvme0n1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.779 00:54:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.779 [ 00:29:08.779 { 00:29:08.779 "name": "nvme0n1", 00:29:08.779 "aliases": [ 00:29:08.779 "cea216ac-e3cd-44ab-beef-4d2a36dfc282" 00:29:08.779 ], 00:29:08.779 "product_name": "NVMe disk", 00:29:08.779 "block_size": 512, 00:29:08.779 "num_blocks": 2097152, 00:29:08.779 "uuid": "cea216ac-e3cd-44ab-beef-4d2a36dfc282", 00:29:08.779 "assigned_rate_limits": { 00:29:08.779 "rw_ios_per_sec": 0, 00:29:08.779 "rw_mbytes_per_sec": 0, 00:29:08.779 "r_mbytes_per_sec": 0, 00:29:08.779 "w_mbytes_per_sec": 0 00:29:08.779 }, 00:29:08.779 "claimed": false, 00:29:08.779 "zoned": false, 00:29:08.779 "supported_io_types": { 00:29:08.779 "read": true, 00:29:08.779 "write": true, 00:29:08.779 "unmap": false, 00:29:08.779 "flush": true, 00:29:08.779 "reset": true, 00:29:08.779 "nvme_admin": true, 00:29:08.779 "nvme_io": true, 00:29:08.779 "nvme_io_md": false, 00:29:08.779 "write_zeroes": true, 00:29:08.779 "zcopy": false, 00:29:08.779 "get_zone_info": false, 00:29:08.779 "zone_management": false, 00:29:08.779 "zone_append": false, 00:29:08.779 "compare": true, 00:29:08.779 "compare_and_write": true, 00:29:08.779 "abort": true, 00:29:08.779 "seek_hole": false, 00:29:08.779 "seek_data": false, 00:29:08.779 "copy": true, 00:29:08.779 "nvme_iov_md": false 00:29:08.779 }, 00:29:08.779 "memory_domains": [ 00:29:08.779 { 00:29:08.779 "dma_device_id": "system", 00:29:08.779 "dma_device_type": 1 00:29:08.779 } 00:29:08.779 ], 00:29:08.779 "driver_specific": { 00:29:08.779 "nvme": [ 00:29:08.779 { 00:29:08.779 "trid": { 00:29:08.779 "trtype": "TCP", 00:29:08.779 "adrfam": "IPv4", 00:29:08.779 "traddr": "10.0.0.2", 00:29:08.779 "trsvcid": "4420", 00:29:08.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:08.779 }, 00:29:08.779 "ctrlr_data": { 00:29:08.779 "cntlid": 1, 00:29:08.779 "vendor_id": "0x8086", 00:29:08.779 "model_number": "SPDK bdev Controller", 00:29:08.779 "serial_number": "00000000000000000000", 00:29:08.779 "firmware_revision": "24.09", 00:29:08.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.779 "oacs": { 00:29:08.779 "security": 0, 00:29:08.779 "format": 0, 00:29:08.779 "firmware": 0, 00:29:08.779 "ns_manage": 0 00:29:08.779 }, 00:29:08.779 "multi_ctrlr": true, 00:29:08.780 "ana_reporting": false 00:29:08.780 }, 00:29:08.780 "vs": { 00:29:08.780 "nvme_version": "1.3" 00:29:08.780 }, 00:29:08.780 "ns_data": { 00:29:08.780 "id": 1, 00:29:08.780 "can_share": true 00:29:08.780 } 00:29:08.780 } 00:29:08.780 ], 00:29:08.780 "mp_policy": "active_passive" 00:29:08.780 } 00:29:08.780 } 00:29:08.780 ] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
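Everything in this test flows through that one RPC socket: the target half exports a null bdev under a fixed NGUID, and the initiator half of the same process attaches to it over TCP so bdev_get_bdevs can confirm the resulting NVMe bdev surfaces that NGUID as its uuid. A condensed sketch of the sequence traced above, with rpc.py standing in for the test's rpc_cmd wrapper:

    rpc="$spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512     # 1024 MB backing bdev, 512 B blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g cea216ace3cd44abbeef4d2a36dfc282   # NGUID picked up front by the test
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: attach, check the uuid, then exercise a controller reset
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_get_bdevs -b nvme0n1            # expect "uuid": "cea216ac-e3cd-..."
    $rpc bdev_nvme_reset_controller nvme0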
00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 [2024-07-13 00:54:20.010403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:08.780 [2024-07-13 00:54:20.010457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2bce0 (9): Bad file descriptor 00:29:08.780 [2024-07-13 00:54:20.142312] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 [ 00:29:08.780 { 00:29:08.780 "name": "nvme0n1", 00:29:08.780 "aliases": [ 00:29:08.780 "cea216ac-e3cd-44ab-beef-4d2a36dfc282" 00:29:08.780 ], 00:29:08.780 "product_name": "NVMe disk", 00:29:08.780 "block_size": 512, 00:29:08.780 "num_blocks": 2097152, 00:29:08.780 "uuid": "cea216ac-e3cd-44ab-beef-4d2a36dfc282", 00:29:08.780 "assigned_rate_limits": { 00:29:08.780 "rw_ios_per_sec": 0, 00:29:08.780 "rw_mbytes_per_sec": 0, 00:29:08.780 "r_mbytes_per_sec": 0, 00:29:08.780 "w_mbytes_per_sec": 0 00:29:08.780 }, 00:29:08.780 "claimed": false, 00:29:08.780 "zoned": false, 00:29:08.780 "supported_io_types": { 00:29:08.780 "read": true, 00:29:08.780 "write": true, 00:29:08.780 "unmap": false, 00:29:08.780 "flush": true, 00:29:08.780 "reset": true, 00:29:08.780 "nvme_admin": true, 00:29:08.780 "nvme_io": true, 00:29:08.780 "nvme_io_md": false, 00:29:08.780 "write_zeroes": true, 00:29:08.780 "zcopy": false, 00:29:08.780 "get_zone_info": false, 00:29:08.780 "zone_management": false, 00:29:08.780 "zone_append": false, 00:29:08.780 "compare": true, 00:29:08.780 "compare_and_write": true, 00:29:08.780 "abort": true, 00:29:08.780 "seek_hole": false, 00:29:08.780 "seek_data": false, 00:29:08.780 "copy": true, 00:29:08.780 "nvme_iov_md": false 00:29:08.780 }, 00:29:08.780 "memory_domains": [ 00:29:08.780 { 00:29:08.780 "dma_device_id": "system", 00:29:08.780 "dma_device_type": 1 00:29:08.780 } 00:29:08.780 ], 00:29:08.780 "driver_specific": { 00:29:08.780 "nvme": [ 00:29:08.780 { 00:29:08.780 "trid": { 00:29:08.780 "trtype": "TCP", 00:29:08.780 "adrfam": "IPv4", 00:29:08.780 "traddr": "10.0.0.2", 00:29:08.780 "trsvcid": "4420", 00:29:08.780 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:08.780 }, 00:29:08.780 "ctrlr_data": { 00:29:08.780 "cntlid": 2, 00:29:08.780 "vendor_id": "0x8086", 00:29:08.780 "model_number": "SPDK bdev Controller", 00:29:08.780 "serial_number": "00000000000000000000", 00:29:08.780 "firmware_revision": "24.09", 00:29:08.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.780 "oacs": { 00:29:08.780 "security": 0, 00:29:08.780 "format": 0, 00:29:08.780 "firmware": 0, 00:29:08.780 "ns_manage": 0 00:29:08.780 }, 00:29:08.780 "multi_ctrlr": true, 00:29:08.780 "ana_reporting": false 00:29:08.780 }, 00:29:08.780 "vs": { 00:29:08.780 "nvme_version": "1.3" 00:29:08.780 }, 00:29:08.780 "ns_data": { 00:29:08.780 "id": 1, 00:29:08.780 "can_share": true 00:29:08.780 } 00:29:08.780 } 00:29:08.780 ], 00:29:08.780 "mp_policy": "active_passive" 00:29:08.780 } 00:29:08.780 } 
00:29:08.780 ] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.kmd3uYJfry 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.kmd3uYJfry 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 [2024-07-13 00:54:20.203011] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:08.780 [2024-07-13 00:54:20.203140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kmd3uYJfry 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 [2024-07-13 00:54:20.211029] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kmd3uYJfry 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 [2024-07-13 00:54:20.219062] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:08.780 [2024-07-13 00:54:20.219098] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
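For reference, the TLS exercise recorded above reduces to the following RPC sequence. This is a sketch, not the harness itself: rpc_cmd is assumed to wrap scripts/rpc.py against the default /var/tmp/spdk.sock socket, and the redirect into the key file is inferred, since bash xtrace does not echo redirections.

  # Generate a PSK interchange key file with owner-only permissions.
  key_path=$(mktemp)
  echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
  chmod 0600 "$key_path"
  # Lock the subsystem down to named hosts, open a TLS listener on 4421,
  # and register host1 with the PSK (the log notes path-based PSKs are
  # deprecated for removal in v24.09).
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # Reattach the initiator-side bdev through the secured listener.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"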
00:29:08.780 nvme0n1 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 [ 00:29:08.780 { 00:29:08.780 "name": "nvme0n1", 00:29:08.780 "aliases": [ 00:29:08.780 "cea216ac-e3cd-44ab-beef-4d2a36dfc282" 00:29:08.780 ], 00:29:08.780 "product_name": "NVMe disk", 00:29:08.780 "block_size": 512, 00:29:08.780 "num_blocks": 2097152, 00:29:08.780 "uuid": "cea216ac-e3cd-44ab-beef-4d2a36dfc282", 00:29:08.780 "assigned_rate_limits": { 00:29:08.780 "rw_ios_per_sec": 0, 00:29:08.780 "rw_mbytes_per_sec": 0, 00:29:08.780 "r_mbytes_per_sec": 0, 00:29:08.780 "w_mbytes_per_sec": 0 00:29:08.780 }, 00:29:08.780 "claimed": false, 00:29:08.780 "zoned": false, 00:29:08.780 "supported_io_types": { 00:29:08.780 "read": true, 00:29:08.780 "write": true, 00:29:08.780 "unmap": false, 00:29:08.780 "flush": true, 00:29:08.780 "reset": true, 00:29:08.780 "nvme_admin": true, 00:29:08.780 "nvme_io": true, 00:29:08.780 "nvme_io_md": false, 00:29:08.780 "write_zeroes": true, 00:29:08.780 "zcopy": false, 00:29:08.780 "get_zone_info": false, 00:29:08.780 "zone_management": false, 00:29:08.780 "zone_append": false, 00:29:08.780 "compare": true, 00:29:08.780 "compare_and_write": true, 00:29:08.780 "abort": true, 00:29:08.780 "seek_hole": false, 00:29:08.780 "seek_data": false, 00:29:08.780 "copy": true, 00:29:08.780 "nvme_iov_md": false 00:29:08.780 }, 00:29:08.780 "memory_domains": [ 00:29:08.780 { 00:29:08.780 "dma_device_id": "system", 00:29:08.780 "dma_device_type": 1 00:29:08.780 } 00:29:08.780 ], 00:29:08.780 "driver_specific": { 00:29:08.780 "nvme": [ 00:29:08.780 { 00:29:08.780 "trid": { 00:29:08.780 "trtype": "TCP", 00:29:08.780 "adrfam": "IPv4", 00:29:08.780 "traddr": "10.0.0.2", 00:29:08.780 "trsvcid": "4421", 00:29:08.780 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:08.780 }, 00:29:08.780 "ctrlr_data": { 00:29:08.780 "cntlid": 3, 00:29:08.780 "vendor_id": "0x8086", 00:29:08.780 "model_number": "SPDK bdev Controller", 00:29:08.780 "serial_number": "00000000000000000000", 00:29:08.780 "firmware_revision": "24.09", 00:29:08.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.780 "oacs": { 00:29:08.780 "security": 0, 00:29:08.780 "format": 0, 00:29:08.780 "firmware": 0, 00:29:08.780 "ns_manage": 0 00:29:08.780 }, 00:29:08.780 "multi_ctrlr": true, 00:29:08.780 "ana_reporting": false 00:29:08.780 }, 00:29:08.780 "vs": { 00:29:08.780 "nvme_version": "1.3" 00:29:08.780 }, 00:29:08.780 "ns_data": { 00:29:08.780 "id": 1, 00:29:08.780 "can_share": true 00:29:08.780 } 00:29:08.780 } 00:29:08.780 ], 00:29:08.780 "mp_policy": "active_passive" 00:29:08.780 } 00:29:08.780 } 00:29:08.780 ] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.kmd3uYJfry 00:29:08.780 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:08.781 00:54:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:08.781 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.781 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:08.781 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.781 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:08.781 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.781 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.039 rmmod nvme_tcp 00:29:09.039 rmmod nvme_fabrics 00:29:09.039 rmmod nvme_keyring 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1524434 ']' 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1524434 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1524434 ']' 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1524434 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1524434 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1524434' 00:29:09.039 killing process with pid 1524434 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1524434 00:29:09.039 [2024-07-13 00:54:20.446613] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:09.039 [2024-07-13 00:54:20.446636] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:09.039 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1524434 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.296 00:54:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:11.201 00:54:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:11.201 00:29:11.201 real 0m9.142s 00:29:11.201 user 0m2.871s 00:29:11.201 sys 0m4.653s 00:29:11.201 00:54:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:11.201 00:54:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:11.201 ************************************ 00:29:11.201 END TEST nvmf_async_init 00:29:11.201 ************************************ 00:29:11.201 00:54:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:11.201 00:54:22 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:11.201 00:54:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:11.201 00:54:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.201 00:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.201 ************************************ 00:29:11.201 START TEST dma 00:29:11.201 ************************************ 00:29:11.201 00:54:22 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:11.461 * Looking for test storage... 00:29:11.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.461 00:54:22 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.461 00:54:22 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.461 00:54:22 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.461 00:54:22 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.461 00:54:22 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.461 00:54:22 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.461 00:54:22 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.461 00:54:22 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:11.461 00:54:22 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.461 00:54:22 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.461 00:54:22 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:11.461 00:54:22 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:11.461 00:29:11.461 real 0m0.123s 00:29:11.461 user 0m0.055s 00:29:11.461 sys 0m0.076s 00:29:11.461 00:54:22 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:11.461 00:54:22 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:29:11.461 ************************************ 00:29:11.461 END TEST dma 00:29:11.461 ************************************ 00:29:11.461 00:54:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:11.461 00:54:22 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:11.461 00:54:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:11.461 00:54:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.461 00:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.461 ************************************ 00:29:11.461 START TEST nvmf_identify 00:29:11.461 ************************************ 00:29:11.461 00:54:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:11.720 * Looking for test storage... 00:29:11.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.720 00:54:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:16.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:16.992 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:16.992 Found net devices under 0000:86:00.0: cvl_0_0 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:16.992 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:16.993 Found net devices under 0000:86:00.1: cvl_0_1 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.993 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:17.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:17.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:29:17.252 00:29:17.252 --- 10.0.0.2 ping statistics --- 00:29:17.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.252 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:29:17.252 00:29:17.252 --- 10.0.0.1 ping statistics --- 00:29:17.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.252 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1528156 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1528156 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1528156 ']' 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:17.252 00:54:28 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:17.512 [2024-07-13 00:54:28.851670] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
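The connectivity checks just above come after nvmf/common.sh splits the two E810 ports between network namespaces. Condensed into a standalone sketch (interface names cvl_0_0/cvl_0_1 as enumerated earlier in this log; a sketch of the harness steps, not the harness itself):

  # Move the target port into a private netns; the initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify both directions before launching nvmf_tgt inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1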
00:29:17.512 [2024-07-13 00:54:28.851717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.512 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.512 [2024-07-13 00:54:28.924262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.512 [2024-07-13 00:54:28.967563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.512 [2024-07-13 00:54:28.967603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.512 [2024-07-13 00:54:28.967609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.512 [2024-07-13 00:54:28.967615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.512 [2024-07-13 00:54:28.967620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.512 [2024-07-13 00:54:28.967693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.512 [2024-07-13 00:54:28.967799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.512 [2024-07-13 00:54:28.967905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.512 [2024-07-13 00:54:28.967907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 [2024-07-13 00:54:29.669194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 Malloc0 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 [2024-07-13 00:54:29.752890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.451 [ 00:29:18.451 { 00:29:18.451 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:18.451 "subtype": "Discovery", 00:29:18.451 "listen_addresses": [ 00:29:18.451 { 00:29:18.451 "trtype": "TCP", 00:29:18.451 "adrfam": "IPv4", 00:29:18.451 "traddr": "10.0.0.2", 00:29:18.451 "trsvcid": "4420" 00:29:18.451 } 00:29:18.451 ], 00:29:18.451 "allow_any_host": true, 00:29:18.451 "hosts": [] 00:29:18.451 }, 00:29:18.451 { 00:29:18.451 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:18.451 "subtype": "NVMe", 00:29:18.451 "listen_addresses": [ 00:29:18.451 { 00:29:18.451 "trtype": "TCP", 00:29:18.451 "adrfam": "IPv4", 00:29:18.451 "traddr": "10.0.0.2", 00:29:18.451 "trsvcid": "4420" 00:29:18.451 } 00:29:18.451 ], 00:29:18.451 "allow_any_host": true, 00:29:18.451 "hosts": [], 00:29:18.451 "serial_number": "SPDK00000000000001", 00:29:18.451 "model_number": "SPDK bdev Controller", 00:29:18.451 "max_namespaces": 32, 00:29:18.451 "min_cntlid": 1, 00:29:18.451 "max_cntlid": 65519, 00:29:18.451 "namespaces": [ 00:29:18.451 { 00:29:18.451 "nsid": 1, 00:29:18.451 "bdev_name": "Malloc0", 00:29:18.451 "name": "Malloc0", 00:29:18.451 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:18.451 "eui64": "ABCDEF0123456789", 00:29:18.451 "uuid": "ff2c462b-7974-4820-b106-8a11cac2d898" 00:29:18.451 } 00:29:18.451 ] 00:29:18.451 } 00:29:18.451 ] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.451 00:54:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:18.451 [2024-07-13 00:54:29.803555] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
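Before spdk_nvme_identify runs, the target is provisioned with a 64 MiB malloc namespace behind nqn.2016-06.io.spdk:cnode1, as the RPC records above show. A minimal equivalent, assuming rpc_cmd wraps scripts/rpc.py on the default socket:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # The identify tool then queries the discovery subsystem over the same listener:
  build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all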
00:29:18.451 [2024-07-13 00:54:29.803595] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528268 ] 00:29:18.451 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.451 [2024-07-13 00:54:29.832773] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:18.451 [2024-07-13 00:54:29.832819] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:18.451 [2024-07-13 00:54:29.832824] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:18.451 [2024-07-13 00:54:29.832833] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:18.451 [2024-07-13 00:54:29.832839] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:18.451 [2024-07-13 00:54:29.833108] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:18.451 [2024-07-13 00:54:29.833137] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2048af0 0 00:29:18.451 [2024-07-13 00:54:29.847233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:18.451 [2024-07-13 00:54:29.847245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:18.451 [2024-07-13 00:54:29.847249] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:18.451 [2024-07-13 00:54:29.847252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:18.451 [2024-07-13 00:54:29.847287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.847292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.847296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.451 [2024-07-13 00:54:29.847308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:18.451 [2024-07-13 00:54:29.847322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.451 [2024-07-13 00:54:29.854234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.451 [2024-07-13 00:54:29.854243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.451 [2024-07-13 00:54:29.854246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.854249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.451 [2024-07-13 00:54:29.854261] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:18.451 [2024-07-13 00:54:29.854267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:18.451 [2024-07-13 00:54:29.854274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:18.451 [2024-07-13 00:54:29.854287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.854290] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.854293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.451 [2024-07-13 00:54:29.854301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.451 [2024-07-13 00:54:29.854313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.451 [2024-07-13 00:54:29.854403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.451 [2024-07-13 00:54:29.854409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.451 [2024-07-13 00:54:29.854412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.854415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.451 [2024-07-13 00:54:29.854420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:18.451 [2024-07-13 00:54:29.854426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:18.451 [2024-07-13 00:54:29.854432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.854435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.451 [2024-07-13 00:54:29.854438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.451 [2024-07-13 00:54:29.854444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.451 [2024-07-13 00:54:29.854454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.451 [2024-07-13 00:54:29.854520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.451 [2024-07-13 00:54:29.854526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.452 [2024-07-13 00:54:29.854529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.452 [2024-07-13 00:54:29.854537] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:18.452 [2024-07-13 00:54:29.854544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:18.452 [2024-07-13 00:54:29.854550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.854561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.452 [2024-07-13 00:54:29.854571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.452 [2024-07-13 00:54:29.854637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.452 
[2024-07-13 00:54:29.854642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.452 [2024-07-13 00:54:29.854645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.452 [2024-07-13 00:54:29.854653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:18.452 [2024-07-13 00:54:29.854661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.854675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.452 [2024-07-13 00:54:29.854684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.452 [2024-07-13 00:54:29.854742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.452 [2024-07-13 00:54:29.854748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.452 [2024-07-13 00:54:29.854751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.452 [2024-07-13 00:54:29.854757] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:18.452 [2024-07-13 00:54:29.854762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:18.452 [2024-07-13 00:54:29.854769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:18.452 [2024-07-13 00:54:29.854873] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:18.452 [2024-07-13 00:54:29.854877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:18.452 [2024-07-13 00:54:29.854884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.854896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.452 [2024-07-13 00:54:29.854905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.452 [2024-07-13 00:54:29.854966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.452 [2024-07-13 00:54:29.854972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.452 [2024-07-13 00:54:29.854975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.452 [2024-07-13 00:54:29.854982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:18.452 [2024-07-13 00:54:29.854990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.854996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.855002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.452 [2024-07-13 00:54:29.855011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.452 [2024-07-13 00:54:29.855083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.452 [2024-07-13 00:54:29.855088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.452 [2024-07-13 00:54:29.855091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.855094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.452 [2024-07-13 00:54:29.855098] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:18.452 [2024-07-13 00:54:29.855104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:18.452 [2024-07-13 00:54:29.855110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:18.452 [2024-07-13 00:54:29.855117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:18.452 [2024-07-13 00:54:29.855125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.855128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.855133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.452 [2024-07-13 00:54:29.855142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.452 [2024-07-13 00:54:29.855263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.452 [2024-07-13 00:54:29.855268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.452 [2024-07-13 00:54:29.855271] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.855275] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048af0): datao=0, datal=4096, cccid=0 00:29:18.452 [2024-07-13 00:54:29.855278] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b5340) on tqpair(0x2048af0): expected_datao=0, payload_size=4096 00:29:18.452 [2024-07-13 00:54:29.855282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.855294] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.855297] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.452 [2024-07-13 00:54:29.897242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.452 [2024-07-13 00:54:29.897246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.452 [2024-07-13 00:54:29.897256] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:18.452 [2024-07-13 00:54:29.897264] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:18.452 [2024-07-13 00:54:29.897268] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:18.452 [2024-07-13 00:54:29.897272] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:18.452 [2024-07-13 00:54:29.897276] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:18.452 [2024-07-13 00:54:29.897281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:18.452 [2024-07-13 00:54:29.897289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:18.452 [2024-07-13 00:54:29.897295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.897309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:18.452 [2024-07-13 00:54:29.897322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.452 [2024-07-13 00:54:29.897386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.452 [2024-07-13 00:54:29.897395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.452 [2024-07-13 00:54:29.897398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.452 [2024-07-13 00:54:29.897408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.897420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.452 [2024-07-13 00:54:29.897425] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.897436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.452 [2024-07-13 00:54:29.897441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.897452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.452 [2024-07-13 00:54:29.897457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.452 [2024-07-13 00:54:29.897468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.452 [2024-07-13 00:54:29.897473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:18.452 [2024-07-13 00:54:29.897482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:18.452 [2024-07-13 00:54:29.897488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.452 [2024-07-13 00:54:29.897491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048af0) 00:29:18.453 [2024-07-13 00:54:29.897497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.453 [2024-07-13 00:54:29.897508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5340, cid 0, qid 0 00:29:18.453 [2024-07-13 00:54:29.897513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b54c0, cid 1, qid 0 00:29:18.453 [2024-07-13 00:54:29.897517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5640, cid 2, qid 0 00:29:18.453 [2024-07-13 00:54:29.897521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.453 [2024-07-13 00:54:29.897525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5940, cid 4, qid 0 00:29:18.453 [2024-07-13 00:54:29.897621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.453 [2024-07-13 00:54:29.897627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.453 [2024-07-13 00:54:29.897630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897633] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5940) on tqpair=0x2048af0 00:29:18.453 [2024-07-13 00:54:29.897637] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:18.453 [2024-07-13 00:54:29.897643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:18.453 [2024-07-13 00:54:29.897652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048af0) 00:29:18.453 [2024-07-13 00:54:29.897661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.453 [2024-07-13 00:54:29.897671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5940, cid 4, qid 0 00:29:18.453 [2024-07-13 00:54:29.897743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.453 [2024-07-13 00:54:29.897748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.453 [2024-07-13 00:54:29.897752] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897755] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048af0): datao=0, datal=4096, cccid=4 00:29:18.453 [2024-07-13 00:54:29.897759] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b5940) on tqpair(0x2048af0): expected_datao=0, payload_size=4096 00:29:18.453 [2024-07-13 00:54:29.897762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897773] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897777] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.453 [2024-07-13 00:54:29.897814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.453 [2024-07-13 00:54:29.897817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5940) on tqpair=0x2048af0 00:29:18.453 [2024-07-13 00:54:29.897832] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:18.453 [2024-07-13 00:54:29.897851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048af0) 00:29:18.453 [2024-07-13 00:54:29.897861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.453 [2024-07-13 00:54:29.897867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.897873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2048af0) 00:29:18.453 [2024-07-13 00:54:29.897878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.453 [2024-07-13 00:54:29.897893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x20b5940, cid 4, qid 0 00:29:18.453 [2024-07-13 00:54:29.897897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5ac0, cid 5, qid 0 00:29:18.453 [2024-07-13 00:54:29.897993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.453 [2024-07-13 00:54:29.897998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.453 [2024-07-13 00:54:29.898002] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.898005] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048af0): datao=0, datal=1024, cccid=4 00:29:18.453 [2024-07-13 00:54:29.898008] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b5940) on tqpair(0x2048af0): expected_datao=0, payload_size=1024 00:29:18.453 [2024-07-13 00:54:29.898012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.898018] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.898023] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.898027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.453 [2024-07-13 00:54:29.898032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.453 [2024-07-13 00:54:29.898035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.898039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5ac0) on tqpair=0x2048af0 00:29:18.453 [2024-07-13 00:54:29.939289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.453 [2024-07-13 00:54:29.939301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.453 [2024-07-13 00:54:29.939304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.939308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5940) on tqpair=0x2048af0 00:29:18.453 [2024-07-13 00:54:29.939318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.939321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048af0) 00:29:18.453 [2024-07-13 00:54:29.939329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.453 [2024-07-13 00:54:29.939345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5940, cid 4, qid 0 00:29:18.453 [2024-07-13 00:54:29.939446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.453 [2024-07-13 00:54:29.939452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.453 [2024-07-13 00:54:29.939455] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.939458] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048af0): datao=0, datal=3072, cccid=4 00:29:18.453 [2024-07-13 00:54:29.939462] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b5940) on tqpair(0x2048af0): expected_datao=0, payload_size=3072 00:29:18.453 [2024-07-13 00:54:29.939466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.939478] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.939482] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.980293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.453 [2024-07-13 00:54:29.980307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.453 [2024-07-13 00:54:29.980310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.980314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5940) on tqpair=0x2048af0 00:29:18.453 [2024-07-13 00:54:29.980323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.980326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2048af0) 00:29:18.453 [2024-07-13 00:54:29.980333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.453 [2024-07-13 00:54:29.980348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b5940, cid 4, qid 0 00:29:18.453 [2024-07-13 00:54:29.980437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.453 [2024-07-13 00:54:29.980443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.453 [2024-07-13 00:54:29.980446] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.980449] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2048af0): datao=0, datal=8, cccid=4 00:29:18.453 [2024-07-13 00:54:29.980453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20b5940) on tqpair(0x2048af0): expected_datao=0, payload_size=8 00:29:18.453 [2024-07-13 00:54:29.980457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.980463] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.453 [2024-07-13 00:54:29.980466] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.719 [2024-07-13 00:54:30.025233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.719 [2024-07-13 00:54:30.025246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.719 [2024-07-13 00:54:30.025249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.719 [2024-07-13 00:54:30.025253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5940) on tqpair=0x2048af0
00:29:18.719 =====================================================
00:29:18.719 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:29:18.719 =====================================================
00:29:18.719 Controller Capabilities/Features
00:29:18.719 ================================
00:29:18.719 Vendor ID: 0000
00:29:18.719 Subsystem Vendor ID: 0000
00:29:18.719 Serial Number: ....................
00:29:18.719 Model Number: ........................................
00:29:18.719 Firmware Version: 24.09
00:29:18.719 Recommended Arb Burst: 0
00:29:18.719 IEEE OUI Identifier: 00 00 00
00:29:18.719 Multi-path I/O
00:29:18.719 May have multiple subsystem ports: No
00:29:18.719 May have multiple controllers: No
00:29:18.719 Associated with SR-IOV VF: No
00:29:18.719 Max Data Transfer Size: 131072
00:29:18.719 Max Number of Namespaces: 0
00:29:18.719 Max Number of I/O Queues: 1024
00:29:18.719 NVMe Specification Version (VS): 1.3
00:29:18.719 NVMe Specification Version (Identify): 1.3
00:29:18.719 Maximum Queue Entries: 128
00:29:18.719 Contiguous Queues Required: Yes
00:29:18.719 Arbitration Mechanisms Supported
00:29:18.719 Weighted Round Robin: Not Supported
00:29:18.719 Vendor Specific: Not Supported
00:29:18.719 Reset Timeout: 15000 ms
00:29:18.719 Doorbell Stride: 4 bytes
00:29:18.719 NVM Subsystem Reset: Not Supported
00:29:18.719 Command Sets Supported
00:29:18.719 NVM Command Set: Supported
00:29:18.719 Boot Partition: Not Supported
00:29:18.719 Memory Page Size Minimum: 4096 bytes
00:29:18.719 Memory Page Size Maximum: 4096 bytes
00:29:18.719 Persistent Memory Region: Not Supported
00:29:18.719 Optional Asynchronous Events Supported
00:29:18.719 Namespace Attribute Notices: Not Supported
00:29:18.719 Firmware Activation Notices: Not Supported
00:29:18.719 ANA Change Notices: Not Supported
00:29:18.719 PLE Aggregate Log Change Notices: Not Supported
00:29:18.719 LBA Status Info Alert Notices: Not Supported
00:29:18.719 EGE Aggregate Log Change Notices: Not Supported
00:29:18.719 Normal NVM Subsystem Shutdown event: Not Supported
00:29:18.719 Zone Descriptor Change Notices: Not Supported
00:29:18.719 Discovery Log Change Notices: Supported
00:29:18.719 Controller Attributes
00:29:18.719 128-bit Host Identifier: Not Supported
00:29:18.719 Non-Operational Permissive Mode: Not Supported
00:29:18.719 NVM Sets: Not Supported
00:29:18.719 Read Recovery Levels: Not Supported
00:29:18.719 Endurance Groups: Not Supported
00:29:18.719 Predictable Latency Mode: Not Supported
00:29:18.719 Traffic Based Keep ALive: Not Supported
00:29:18.719 Namespace Granularity: Not Supported
00:29:18.719 SQ Associations: Not Supported
00:29:18.719 UUID List: Not Supported
00:29:18.719 Multi-Domain Subsystem: Not Supported
00:29:18.719 Fixed Capacity Management: Not Supported
00:29:18.719 Variable Capacity Management: Not Supported
00:29:18.719 Delete Endurance Group: Not Supported
00:29:18.719 Delete NVM Set: Not Supported
00:29:18.719 Extended LBA Formats Supported: Not Supported
00:29:18.719 Flexible Data Placement Supported: Not Supported
00:29:18.719 
00:29:18.719 Controller Memory Buffer Support
00:29:18.719 ================================
00:29:18.719 Supported: No
00:29:18.719 
00:29:18.719 Persistent Memory Region Support
00:29:18.719 ================================
00:29:18.719 Supported: No
00:29:18.719 
00:29:18.719 Admin Command Set Attributes
00:29:18.719 ============================
00:29:18.719 Security Send/Receive: Not Supported
00:29:18.719 Format NVM: Not Supported
00:29:18.719 Firmware Activate/Download: Not Supported
00:29:18.719 Namespace Management: Not Supported
00:29:18.719 Device Self-Test: Not Supported
00:29:18.719 Directives: Not Supported
00:29:18.719 NVMe-MI: Not Supported
00:29:18.719 Virtualization Management: Not Supported
00:29:18.719 Doorbell Buffer Config: Not Supported
00:29:18.719 Get LBA Status Capability: Not Supported
00:29:18.719 Command & Feature Lockdown Capability: Not Supported
00:29:18.719 Abort Command Limit: 1
00:29:18.719 Async Event Request Limit: 4
00:29:18.719 Number of Firmware Slots: N/A
00:29:18.719 Firmware Slot 1 Read-Only: N/A
00:29:18.719 Firmware Activation Without Reset: N/A
00:29:18.719 Multiple Update Detection Support: N/A
00:29:18.719 Firmware Update Granularity: No Information Provided
00:29:18.719 Per-Namespace SMART Log: No
00:29:18.719 Asymmetric Namespace Access Log Page: Not Supported
00:29:18.719 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:18.719 Command Effects Log Page: Not Supported
00:29:18.719 Get Log Page Extended Data: Supported
00:29:18.719 Telemetry Log Pages: Not Supported
00:29:18.719 Persistent Event Log Pages: Not Supported
00:29:18.719 Supported Log Pages Log Page: May Support
00:29:18.719 Commands Supported & Effects Log Page: Not Supported
00:29:18.719 Feature Identifiers & Effects Log Page:May Support
00:29:18.719 NVMe-MI Commands & Effects Log Page: May Support
00:29:18.719 Data Area 4 for Telemetry Log: Not Supported
00:29:18.719 Error Log Page Entries Supported: 128
00:29:18.719 Keep Alive: Not Supported
00:29:18.719 
00:29:18.719 NVM Command Set Attributes
00:29:18.719 ==========================
00:29:18.719 Submission Queue Entry Size
00:29:18.719 Max: 1
00:29:18.719 Min: 1
00:29:18.719 Completion Queue Entry Size
00:29:18.719 Max: 1
00:29:18.719 Min: 1
00:29:18.719 Number of Namespaces: 0
00:29:18.719 Compare Command: Not Supported
00:29:18.719 Write Uncorrectable Command: Not Supported
00:29:18.719 Dataset Management Command: Not Supported
00:29:18.719 Write Zeroes Command: Not Supported
00:29:18.719 Set Features Save Field: Not Supported
00:29:18.719 Reservations: Not Supported
00:29:18.719 Timestamp: Not Supported
00:29:18.719 Copy: Not Supported
00:29:18.719 Volatile Write Cache: Not Present
00:29:18.719 Atomic Write Unit (Normal): 1
00:29:18.719 Atomic Write Unit (PFail): 1
00:29:18.719 Atomic Compare & Write Unit: 1
00:29:18.719 Fused Compare & Write: Supported
00:29:18.719 Scatter-Gather List
00:29:18.719 SGL Command Set: Supported
00:29:18.719 SGL Keyed: Supported
00:29:18.719 SGL Bit Bucket Descriptor: Not Supported
00:29:18.719 SGL Metadata Pointer: Not Supported
00:29:18.719 Oversized SGL: Not Supported
00:29:18.719 SGL Metadata Address: Not Supported
00:29:18.719 SGL Offset: Supported
00:29:18.719 Transport SGL Data Block: Not Supported
00:29:18.719 Replay Protected Memory Block: Not Supported
00:29:18.719 
00:29:18.719 Firmware Slot Information
00:29:18.719 =========================
00:29:18.719 Active slot: 0
00:29:18.719 
00:29:18.719 
00:29:18.719 Error Log
00:29:18.719 =========
00:29:18.719 
00:29:18.719 Active Namespaces
00:29:18.719 =================
00:29:18.719 Discovery Log Page
00:29:18.719 ==================
00:29:18.719 Generation Counter: 2
00:29:18.719 Number of Records: 2
00:29:18.719 Record Format: 0
00:29:18.719 
00:29:18.719 Discovery Log Entry 0
00:29:18.720 ----------------------
00:29:18.720 Transport Type: 3 (TCP)
00:29:18.720 Address Family: 1 (IPv4)
00:29:18.720 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:18.720 Entry Flags:
00:29:18.720 Duplicate Returned Information: 1
00:29:18.720 Explicit Persistent Connection Support for Discovery: 1
00:29:18.720 Transport Requirements:
00:29:18.720 Secure Channel: Not Required
00:29:18.720 Port ID: 0 (0x0000)
00:29:18.720 Controller ID: 65535 (0xffff)
00:29:18.720 Admin Max SQ Size: 128
00:29:18.720 Transport Service Identifier: 4420
00:29:18.720 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:18.720 Transport Address: 10.0.0.2
00:29:18.720 Discovery Log Entry 1
00:29:18.720 ----------------------
00:29:18.720 Transport Type: 3 (TCP)
00:29:18.720 Address Family: 1 (IPv4)
00:29:18.720 Subsystem Type: 2 (NVM Subsystem)
00:29:18.720 Entry Flags:
00:29:18.720 Duplicate Returned Information: 0
00:29:18.720 Explicit Persistent Connection Support for Discovery: 0
00:29:18.720 Transport Requirements:
00:29:18.720 Secure Channel: Not Required
00:29:18.720 Port ID: 0 (0x0000)
00:29:18.720 Controller ID: 65535 (0xffff)
00:29:18.720 Admin Max SQ Size: 128
00:29:18.720 Transport Service Identifier: 4420
00:29:18.720 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:29:18.720 Transport Address: 10.0.0.2 [2024-07-13 00:54:30.025335] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:18.720 [2024-07-13 00:54:30.025346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5340) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.720 [2024-07-13 00:54:30.025357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b54c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.720 [2024-07-13 00:54:30.025366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b5640) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.720 [2024-07-13 00:54:30.025374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.720 [2024-07-13 00:54:30.025388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.025401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.025414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.025480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.025486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.025489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 
00:54:30.025512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.025523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.025610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.025616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.025618] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025626] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:18.720 [2024-07-13 00:54:30.025630] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:18.720 [2024-07-13 00:54:30.025638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.025652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.025661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.025731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.025736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.025739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.025764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.025773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.025839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.025845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.025848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025866] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.025871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.025880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.025950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.025956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.025959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.025970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.025977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.025982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.025991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.026057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.026063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.026066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.026078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.026091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.026100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.026172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.026178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.026181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.026193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.026206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.026216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.026285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.026291] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.026294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.026306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.720 [2024-07-13 00:54:30.026318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.720 [2024-07-13 00:54:30.026344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.720 [2024-07-13 00:54:30.026408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.720 [2024-07-13 00:54:30.026414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.720 [2024-07-13 00:54:30.026417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.720 [2024-07-13 00:54:30.026421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.720 [2024-07-13 00:54:30.026429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.026441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.026451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.026520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.026526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.026529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.026541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.026557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.026567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 
[2024-07-13 00:54:30.026649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.026655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.026658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.026670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.026683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.026693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.026768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.026774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.026777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.026789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.026802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.026812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.026879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.026885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.026888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.026900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.026906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.026912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.026922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:18.721 [2024-07-13 00:54:30.027017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027773] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.721 [2024-07-13 00:54:30.027883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.721 [2024-07-13 00:54:30.027890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.721 [2024-07-13 00:54:30.027895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.721 [2024-07-13 00:54:30.027905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.721 [2024-07-13 00:54:30.027977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.721 [2024-07-13 00:54:30.027983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.721 [2024-07-13 00:54:30.027986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.027989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.027997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.028010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.028020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.028098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.028106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.028110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.028121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 
[2024-07-13 00:54:30.028134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.028143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.028223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.028239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.028246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.028272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.028317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.028341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.028461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.028481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.028509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.028566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.028626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.028638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.028724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.028734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.028738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.028757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.028778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.028794] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.028866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.028874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.028880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.028894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.028908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.028919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.028982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.028989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.028992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.028995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.029004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.029008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.029011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.029017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.029027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.029095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.029101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.029104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.029107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.029117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.029121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.029125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.029131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.029141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.029206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 
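The run of near-identical FABRIC PROPERTY GET qid:0 cid:3 records above is the controller-shutdown poll: nvme_ctrlr_shutdown_poll_async re-reads CSTS over the admin queue until CSTS.SHST reports shutdown complete, which the next record confirms after 7 milliseconds. From an application's point of view the whole exchange is driven by one detach call; a minimal sketch, assuming an already-connected ctrlr handle (the function name is illustrative only):

    #include "spdk/nvme.h"

    /* Sketch: detaching sets CC.SHN for a normal shutdown, after which the
     * driver polls CSTS until SHST reads "shutdown complete" -- exactly the
     * repeated PROPERTY GET traffic captured in the trace above. */
    static int
    disconnect_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
            return spdk_nvme_detach(ctrlr);
    }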
[2024-07-13 00:54:30.029212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.029216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.029220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.033238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.033245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.033248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2048af0) 00:29:18.722 [2024-07-13 00:54:30.033254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.722 [2024-07-13 00:54:30.033266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20b57c0, cid 3, qid 0 00:29:18.722 [2024-07-13 00:54:30.033329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.033335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.033339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.033346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20b57c0) on tqpair=0x2048af0 00:29:18.722 [2024-07-13 00:54:30.033352] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:18.722 00:29:18.722 00:54:30 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:18.722 [2024-07-13 00:54:30.070533] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
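The -r argument to spdk_nvme_identify above is a transport-ID string whose key:value fields map onto struct spdk_nvme_transport_id, so the same target can be reached programmatically. A minimal sketch, assuming spdk_env_init() has already run (the function name is illustrative; error handling trimmed):

    #include <string.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_target(void)
    {
            struct spdk_nvme_transport_id trid;

            memset(&trid, 0, sizeof(trid));
            /* The same transport-ID string passed to spdk_nvme_identify -r. */
            if (spdk_nvme_transport_id_parse(&trid,
                            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return NULL;
            }

            /* NULL opts: connect with the driver defaults, as the tool does. */
            return spdk_nvme_connect(&trid, NULL, 0);
    }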
00:29:18.722 [2024-07-13 00:54:30.070580] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528308 ] 00:29:18.722 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.722 [2024-07-13 00:54:30.100553] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:18.722 [2024-07-13 00:54:30.100591] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:18.722 [2024-07-13 00:54:30.100596] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:18.722 [2024-07-13 00:54:30.100604] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:18.722 [2024-07-13 00:54:30.100609] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:18.722 [2024-07-13 00:54:30.100834] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:18.722 [2024-07-13 00:54:30.100856] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x661af0 0 00:29:18.722 [2024-07-13 00:54:30.114233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:18.722 [2024-07-13 00:54:30.114243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:18.722 [2024-07-13 00:54:30.114247] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:18.722 [2024-07-13 00:54:30.114250] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:18.722 [2024-07-13 00:54:30.114275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.114280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.114283] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.722 [2024-07-13 00:54:30.114292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:18.722 [2024-07-13 00:54:30.114306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.722 [2024-07-13 00:54:30.122234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.722 [2024-07-13 00:54:30.122244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.722 [2024-07-13 00:54:30.122247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.722 [2024-07-13 00:54:30.122250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.722 [2024-07-13 00:54:30.122260] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:18.722 [2024-07-13 00:54:30.122266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:18.722 [2024-07-13 00:54:30.122271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:18.723 [2024-07-13 00:54:30.122281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.723 
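The records above are the NVMe/TCP admin-queue handshake: a client socket is opened with the posix sock implementation, an ICReq is sent, and the ICResp comes back with header digest (host_hdgst_enable) and data digest (host_ddgst_enable) both 0, the default. Digest protection is requested through the controller options before connecting; a hedged sketch using the header_digest/data_digest fields of struct spdk_nvme_ctrlr_opts (function name illustrative):

    #include "spdk/nvme.h"

    /* Sketch: ask for CRC32C header and data digests on the TCP connection,
     * so the ICReq advertises them instead of the all-zero negotiation seen
     * in the trace above. */
    static struct spdk_nvme_ctrlr *
    connect_with_digests(const struct spdk_nvme_transport_id *trid)
    {
            struct spdk_nvme_ctrlr_opts opts;

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            opts.header_digest = true;
            opts.data_digest = true;

            return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }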
[2024-07-13 00:54:30.122290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.723 [2024-07-13 00:54:30.122297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.723 [2024-07-13 00:54:30.122310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.723 [2024-07-13 00:54:30.122392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.723 [2024-07-13 00:54:30.122397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.723 [2024-07-13 00:54:30.122400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.723 [2024-07-13 00:54:30.122408] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:18.723 [2024-07-13 00:54:30.122414] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:18.723 [2024-07-13 00:54:30.122420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.723 [2024-07-13 00:54:30.122432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.723 [2024-07-13 00:54:30.122442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.723 [2024-07-13 00:54:30.122507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.723 [2024-07-13 00:54:30.122513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.723 [2024-07-13 00:54:30.122516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.723 [2024-07-13 00:54:30.122524] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:18.723 [2024-07-13 00:54:30.122530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:18.723 [2024-07-13 00:54:30.122536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.723 [2024-07-13 00:54:30.122547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.723 [2024-07-13 00:54:30.122557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.723 [2024-07-13 00:54:30.122622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.723 [2024-07-13 00:54:30.122628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.723 
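Because this is a fabrics controller, the "read vs", "read cap", and "check en" states above are serviced by FABRIC PROPERTY GET capsules rather than MMIO register reads; note the 15000 ms timeout on "check en wait for cc", which is CAP.TO times its 500 ms unit. Once initialization finishes, the driver's cached copies of these registers can be read back; a small sketch, assuming a connected ctrlr (function name illustrative):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch: CAP and VS were fetched during init by the PROPERTY GET capsules
     * traced above; these accessors return the driver's cached copies. */
    static void
    print_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
            union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
            union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);

            printf("NVMe %u.%u, MQES %u, CC timeout %u ms\n",
                   vs.bits.mjr, vs.bits.mnr, cap.bits.mqes + 1u,
                   cap.bits.to * 500u);  /* CAP.TO is in 500 ms units */
    }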
[2024-07-13 00:54:30.122631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.723 [2024-07-13 00:54:30.122638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:18.723 [2024-07-13 00:54:30.122646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.723 [2024-07-13 00:54:30.122658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.723 [2024-07-13 00:54:30.122669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.723 [2024-07-13 00:54:30.122732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.723 [2024-07-13 00:54:30.122738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.723 [2024-07-13 00:54:30.122741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.723 [2024-07-13 00:54:30.122747] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:18.723 [2024-07-13 00:54:30.122752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:18.723 [2024-07-13 00:54:30.122758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:18.723 [2024-07-13 00:54:30.122863] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:18.723 [2024-07-13 00:54:30.122867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:18.723 [2024-07-13 00:54:30.122874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.723 [2024-07-13 00:54:30.122886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.723 [2024-07-13 00:54:30.122896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.723 [2024-07-13 00:54:30.122986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.723 [2024-07-13 00:54:30.122991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.723 [2024-07-13 00:54:30.122994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.122997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.723 [2024-07-13 
00:54:30.123001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:18.723 [2024-07-13 00:54:30.123010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.723 [2024-07-13 00:54:30.123022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.723 [2024-07-13 00:54:30.123031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.723 [2024-07-13 00:54:30.123097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.723 [2024-07-13 00:54:30.123103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.723 [2024-07-13 00:54:30.123106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.723 [2024-07-13 00:54:30.123113] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:18.723 [2024-07-13 00:54:30.123117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:18.723 [2024-07-13 00:54:30.123123] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:18.723 [2024-07-13 00:54:30.123130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:18.723 [2024-07-13 00:54:30.123138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.723 [2024-07-13 00:54:30.123147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.723 [2024-07-13 00:54:30.123156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.723 [2024-07-13 00:54:30.123288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.723 [2024-07-13 00:54:30.123294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.723 [2024-07-13 00:54:30.123297] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123300] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=4096, cccid=0 00:29:18.723 [2024-07-13 00:54:30.123304] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6ce340) on tqpair(0x661af0): expected_datao=0, payload_size=4096 00:29:18.723 [2024-07-13 00:54:30.123308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123313] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123317] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.723 
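IDENTIFY (06h) with cdw10:00000001 is CNS 01h, Identify Controller; the 4096-byte C2H payload handled in the next records is parsed into the limits the trace then prints (transport max_xfer_size, MDTS max_xfer_size 131072, CNTLID 0x0001, fused compare-and-write). The parsed structure stays cached in the driver; a short sketch of reading it back, assuming a connected ctrlr (function name illustrative; MDTS arithmetic per the NVMe spec):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch: the Identify Controller data fetched by the CNS 01h command in
     * the trace is exposed read-only once initialization completes. */
    static void
    print_limits(struct spdk_nvme_ctrlr *ctrlr)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
            uint32_t min_page = 1u << (12 + cap.bits.mpsmin);
            /* MDTS is a power-of-two multiple of the minimum page size:
             * 4096 << 5 = 131072, matching the trace above. 0 means no limit. */
            uint64_t max_xfer = cdata->mdts ? (uint64_t)min_page << cdata->mdts : 0;

            printf("cntlid 0x%04" PRIx16 ", max transfer %" PRIu64 " bytes\n",
                   cdata->cntlid, max_xfer);
    }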
[2024-07-13 00:54:30.123330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.723 [2024-07-13 00:54:30.123335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.723 [2024-07-13 00:54:30.123338] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.723 [2024-07-13 00:54:30.123341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.724 [2024-07-13 00:54:30.123347] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:18.724 [2024-07-13 00:54:30.123353] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:18.724 [2024-07-13 00:54:30.123357] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:18.724 [2024-07-13 00:54:30.123361] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:18.724 [2024-07-13 00:54:30.123365] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:18.724 [2024-07-13 00:54:30.123369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.123393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:18.724 [2024-07-13 00:54:30.123404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.724 [2024-07-13 00:54:30.123474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.724 [2024-07-13 00:54:30.123480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.724 [2024-07-13 00:54:30.123483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce340) on tqpair=0x661af0 00:29:18.724 [2024-07-13 00:54:30.123493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.123505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.724 [2024-07-13 00:54:30.123511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x661af0) 
00:29:18.724 [2024-07-13 00:54:30.123522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.724 [2024-07-13 00:54:30.123527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.123538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.724 [2024-07-13 00:54:30.123542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.123554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.724 [2024-07-13 00:54:30.123557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.123581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.724 [2024-07-13 00:54:30.123592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce340, cid 0, qid 0 00:29:18.724 [2024-07-13 00:54:30.123596] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce4c0, cid 1, qid 0 00:29:18.724 [2024-07-13 00:54:30.123600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce640, cid 2, qid 0 00:29:18.724 [2024-07-13 00:54:30.123604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce7c0, cid 3, qid 0 00:29:18.724 [2024-07-13 00:54:30.123608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce940, cid 4, qid 0 00:29:18.724 [2024-07-13 00:54:30.123710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.724 [2024-07-13 00:54:30.123715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.724 [2024-07-13 00:54:30.123718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce940) on tqpair=0x661af0 00:29:18.724 [2024-07-13 00:54:30.123726] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:18.724 [2024-07-13 00:54:30.123730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123736] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.123761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:18.724 [2024-07-13 00:54:30.123770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce940, cid 4, qid 0 00:29:18.724 [2024-07-13 00:54:30.123833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.724 [2024-07-13 00:54:30.123839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.724 [2024-07-13 00:54:30.123842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce940) on tqpair=0x661af0 00:29:18.724 [2024-07-13 00:54:30.123894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.123909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.123912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.123918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.724 [2024-07-13 00:54:30.123927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce940, cid 4, qid 0 00:29:18.724 [2024-07-13 00:54:30.124007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.724 [2024-07-13 00:54:30.124013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.724 [2024-07-13 00:54:30.124016] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.124020] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=4096, cccid=4 00:29:18.724 [2024-07-13 00:54:30.124023] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6ce940) on tqpair(0x661af0): expected_datao=0, payload_size=4096 00:29:18.724 [2024-07-13 00:54:30.124027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.124041] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.124044] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.164342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.724 [2024-07-13 00:54:30.164352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:29:18.724 [2024-07-13 00:54:30.164355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.164358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce940) on tqpair=0x661af0 00:29:18.724 [2024-07-13 00:54:30.164366] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:18.724 [2024-07-13 00:54:30.164376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.164384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.164390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.164393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.164400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.724 [2024-07-13 00:54:30.164412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce940, cid 4, qid 0 00:29:18.724 [2024-07-13 00:54:30.164494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.724 [2024-07-13 00:54:30.164500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.724 [2024-07-13 00:54:30.164503] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.164506] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=4096, cccid=4 00:29:18.724 [2024-07-13 00:54:30.164509] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6ce940) on tqpair(0x661af0): expected_datao=0, payload_size=4096 00:29:18.724 [2024-07-13 00:54:30.164513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.164527] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.164531] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.206361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.724 [2024-07-13 00:54:30.206374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.724 [2024-07-13 00:54:30.206377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.206380] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce940) on tqpair=0x661af0 00:29:18.724 [2024-07-13 00:54:30.206392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.206401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:18.724 [2024-07-13 00:54:30.206408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.206411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x661af0) 00:29:18.724 [2024-07-13 00:54:30.206417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.724 [2024-07-13 00:54:30.206429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce940, cid 4, qid 0 00:29:18.724 [2024-07-13 00:54:30.206504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.724 [2024-07-13 00:54:30.206510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.724 [2024-07-13 00:54:30.206513] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.724 [2024-07-13 00:54:30.206516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=4096, cccid=4 00:29:18.725 [2024-07-13 00:54:30.206520] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6ce940) on tqpair(0x661af0): expected_datao=0, payload_size=4096 00:29:18.725 [2024-07-13 00:54:30.206524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.206538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.206542] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.725 [2024-07-13 00:54:30.251245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.725 [2024-07-13 00:54:30.251248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce940) on tqpair=0x661af0 00:29:18.725 [2024-07-13 00:54:30.251258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:18.725 [2024-07-13 00:54:30.251266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:18.725 [2024-07-13 00:54:30.251274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:18.725 [2024-07-13 00:54:30.251280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:18.725 [2024-07-13 00:54:30.251286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:18.725 [2024-07-13 00:54:30.251290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:18.725 [2024-07-13 00:54:30.251295] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:18.725 [2024-07-13 00:54:30.251299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:18.725 [2024-07-13 00:54:30.251303] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:18.725 [2024-07-13 00:54:30.251314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.725 [2024-07-13 00:54:30.251355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce940, cid 4, qid 0 00:29:18.725 [2024-07-13 00:54:30.251360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ceac0, cid 5, qid 0 00:29:18.725 [2024-07-13 00:54:30.251437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.725 [2024-07-13 00:54:30.251443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.725 [2024-07-13 00:54:30.251446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce940) on tqpair=0x661af0 00:29:18.725 [2024-07-13 00:54:30.251455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.725 [2024-07-13 00:54:30.251460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.725 [2024-07-13 00:54:30.251462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ceac0) on tqpair=0x661af0 00:29:18.725 [2024-07-13 00:54:30.251473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ceac0, cid 5, qid 0 00:29:18.725 [2024-07-13 00:54:30.251558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.725 [2024-07-13 00:54:30.251564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.725 [2024-07-13 00:54:30.251567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ceac0) on tqpair=0x661af0 00:29:18.725 [2024-07-13 00:54:30.251578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ceac0, cid 5, qid 0 00:29:18.725 [2024-07-13 00:54:30.251676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.725 [2024-07-13 00:54:30.251682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:29:18.725 [2024-07-13 00:54:30.251685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ceac0) on tqpair=0x661af0 00:29:18.725 [2024-07-13 00:54:30.251695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ceac0, cid 5, qid 0 00:29:18.725 [2024-07-13 00:54:30.251793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.725 [2024-07-13 00:54:30.251799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.725 [2024-07-13 00:54:30.251802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ceac0) on tqpair=0x661af0 00:29:18.725 [2024-07-13 00:54:30.251817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.251864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x661af0) 00:29:18.725 [2024-07-13 00:54:30.251869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.725 [2024-07-13 00:54:30.251879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ceac0, cid 5, qid 0 00:29:18.725 [2024-07-13 00:54:30.251884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6ce940, cid 4, qid 0 00:29:18.725 [2024-07-13 00:54:30.251888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cec40, cid 6, qid 0 00:29:18.725 [2024-07-13 
00:54:30.251892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cedc0, cid 7, qid 0 00:29:18.725 [2024-07-13 00:54:30.252073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.725 [2024-07-13 00:54:30.252079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.725 [2024-07-13 00:54:30.252082] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252085] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=8192, cccid=5 00:29:18.725 [2024-07-13 00:54:30.252088] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6ceac0) on tqpair(0x661af0): expected_datao=0, payload_size=8192 00:29:18.725 [2024-07-13 00:54:30.252094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252107] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252111] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.725 [2024-07-13 00:54:30.252120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.725 [2024-07-13 00:54:30.252123] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252126] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=512, cccid=4 00:29:18.725 [2024-07-13 00:54:30.252130] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6ce940) on tqpair(0x661af0): expected_datao=0, payload_size=512 00:29:18.725 [2024-07-13 00:54:30.252134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252139] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252142] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.725 [2024-07-13 00:54:30.252151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.725 [2024-07-13 00:54:30.252154] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252157] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=512, cccid=6 00:29:18.725 [2024-07-13 00:54:30.252161] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cec40) on tqpair(0x661af0): expected_datao=0, payload_size=512 00:29:18.725 [2024-07-13 00:54:30.252164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252169] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252172] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:18.725 [2024-07-13 00:54:30.252182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:18.725 [2024-07-13 00:54:30.252185] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252187] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x661af0): datao=0, datal=4096, cccid=7 00:29:18.725 [2024-07-13 00:54:30.252191] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cedc0) on tqpair(0x661af0): expected_datao=0, payload_size=4096 00:29:18.725 [2024-07-13 00:54:30.252195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252200] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252203] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:18.725 [2024-07-13 00:54:30.252211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.726 [2024-07-13 00:54:30.252215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.726 [2024-07-13 00:54:30.252218] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.726 [2024-07-13 00:54:30.252222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ceac0) on tqpair=0x661af0 00:29:18.726 [2024-07-13 00:54:30.252237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.726 [2024-07-13 00:54:30.252242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.726 [2024-07-13 00:54:30.252245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.726 [2024-07-13 00:54:30.252248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6ce940) on tqpair=0x661af0 00:29:18.726 [2024-07-13 00:54:30.252257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.726 [2024-07-13 00:54:30.252262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.726 [2024-07-13 00:54:30.252265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.726 [2024-07-13 00:54:30.252268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cec40) on tqpair=0x661af0 00:29:18.726 [2024-07-13 00:54:30.252275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:18.726 [2024-07-13 00:54:30.252280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:18.726 [2024-07-13 00:54:30.252282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:18.726 [2024-07-13 00:54:30.252286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cedc0) on tqpair=0x661af0 00:29:18.726 ===================================================== 00:29:18.726 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.726 ===================================================== 00:29:18.726 Controller Capabilities/Features 00:29:18.726 ================================ 00:29:18.726 Vendor ID: 8086 00:29:18.726 Subsystem Vendor ID: 8086 00:29:18.726 Serial Number: SPDK00000000000001 00:29:18.726 Model Number: SPDK bdev Controller 00:29:18.726 Firmware Version: 24.09 00:29:18.726 Recommended Arb Burst: 6 00:29:18.726 IEEE OUI Identifier: e4 d2 5c 00:29:18.726 Multi-path I/O 00:29:18.726 May have multiple subsystem ports: Yes 00:29:18.726 May have multiple controllers: Yes 00:29:18.726 Associated with SR-IOV VF: No 00:29:18.726 Max Data Transfer Size: 131072 00:29:18.726 Max Number of Namespaces: 32 00:29:18.726 Max Number of I/O Queues: 127 00:29:18.726 NVMe Specification Version (VS): 1.3 00:29:18.726 NVMe Specification Version (Identify): 1.3 00:29:18.726 Maximum Queue Entries: 128 00:29:18.726 Contiguous Queues Required: Yes 00:29:18.726 Arbitration Mechanisms Supported 00:29:18.726 Weighted Round Robin: Not Supported 00:29:18.726 Vendor Specific: Not Supported 00:29:18.726 Reset Timeout: 15000 ms 00:29:18.726 
Doorbell Stride: 4 bytes 00:29:18.726 NVM Subsystem Reset: Not Supported 00:29:18.726 Command Sets Supported 00:29:18.726 NVM Command Set: Supported 00:29:18.726 Boot Partition: Not Supported 00:29:18.726 Memory Page Size Minimum: 4096 bytes 00:29:18.726 Memory Page Size Maximum: 4096 bytes 00:29:18.726 Persistent Memory Region: Not Supported 00:29:18.726 Optional Asynchronous Events Supported 00:29:18.726 Namespace Attribute Notices: Supported 00:29:18.726 Firmware Activation Notices: Not Supported 00:29:18.726 ANA Change Notices: Not Supported 00:29:18.726 PLE Aggregate Log Change Notices: Not Supported 00:29:18.726 LBA Status Info Alert Notices: Not Supported 00:29:18.726 EGE Aggregate Log Change Notices: Not Supported 00:29:18.726 Normal NVM Subsystem Shutdown event: Not Supported 00:29:18.726 Zone Descriptor Change Notices: Not Supported 00:29:18.726 Discovery Log Change Notices: Not Supported 00:29:18.726 Controller Attributes 00:29:18.726 128-bit Host Identifier: Supported 00:29:18.726 Non-Operational Permissive Mode: Not Supported 00:29:18.726 NVM Sets: Not Supported 00:29:18.726 Read Recovery Levels: Not Supported 00:29:18.726 Endurance Groups: Not Supported 00:29:18.726 Predictable Latency Mode: Not Supported 00:29:18.726 Traffic Based Keep ALive: Not Supported 00:29:18.726 Namespace Granularity: Not Supported 00:29:18.726 SQ Associations: Not Supported 00:29:18.726 UUID List: Not Supported 00:29:18.726 Multi-Domain Subsystem: Not Supported 00:29:18.726 Fixed Capacity Management: Not Supported 00:29:18.726 Variable Capacity Management: Not Supported 00:29:18.726 Delete Endurance Group: Not Supported 00:29:18.726 Delete NVM Set: Not Supported 00:29:18.726 Extended LBA Formats Supported: Not Supported 00:29:18.726 Flexible Data Placement Supported: Not Supported 00:29:18.726 00:29:18.726 Controller Memory Buffer Support 00:29:18.726 ================================ 00:29:18.726 Supported: No 00:29:18.726 00:29:18.726 Persistent Memory Region Support 00:29:18.726 ================================ 00:29:18.726 Supported: No 00:29:18.726 00:29:18.726 Admin Command Set Attributes 00:29:18.726 ============================ 00:29:18.726 Security Send/Receive: Not Supported 00:29:18.726 Format NVM: Not Supported 00:29:18.726 Firmware Activate/Download: Not Supported 00:29:18.726 Namespace Management: Not Supported 00:29:18.726 Device Self-Test: Not Supported 00:29:18.726 Directives: Not Supported 00:29:18.726 NVMe-MI: Not Supported 00:29:18.726 Virtualization Management: Not Supported 00:29:18.726 Doorbell Buffer Config: Not Supported 00:29:18.726 Get LBA Status Capability: Not Supported 00:29:18.726 Command & Feature Lockdown Capability: Not Supported 00:29:18.726 Abort Command Limit: 4 00:29:18.726 Async Event Request Limit: 4 00:29:18.726 Number of Firmware Slots: N/A 00:29:18.726 Firmware Slot 1 Read-Only: N/A 00:29:18.726 Firmware Activation Without Reset: N/A 00:29:18.726 Multiple Update Detection Support: N/A 00:29:18.726 Firmware Update Granularity: No Information Provided 00:29:18.726 Per-Namespace SMART Log: No 00:29:18.726 Asymmetric Namespace Access Log Page: Not Supported 00:29:18.726 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:18.726 Command Effects Log Page: Supported 00:29:18.726 Get Log Page Extended Data: Supported 00:29:18.726 Telemetry Log Pages: Not Supported 00:29:18.726 Persistent Event Log Pages: Not Supported 00:29:18.726 Supported Log Pages Log Page: May Support 00:29:18.726 Commands Supported & Effects Log Page: Not Supported 00:29:18.726 Feature Identifiers & 
Effects Log Page:May Support 00:29:18.726 NVMe-MI Commands & Effects Log Page: May Support 00:29:18.726 Data Area 4 for Telemetry Log: Not Supported 00:29:18.726 Error Log Page Entries Supported: 128 00:29:18.726 Keep Alive: Supported 00:29:18.726 Keep Alive Granularity: 10000 ms 00:29:18.726 00:29:18.726 NVM Command Set Attributes 00:29:18.726 ========================== 00:29:18.726 Submission Queue Entry Size 00:29:18.726 Max: 64 00:29:18.726 Min: 64 00:29:18.726 Completion Queue Entry Size 00:29:18.726 Max: 16 00:29:18.726 Min: 16 00:29:18.726 Number of Namespaces: 32 00:29:18.726 Compare Command: Supported 00:29:18.726 Write Uncorrectable Command: Not Supported 00:29:18.726 Dataset Management Command: Supported 00:29:18.726 Write Zeroes Command: Supported 00:29:18.726 Set Features Save Field: Not Supported 00:29:18.726 Reservations: Supported 00:29:18.726 Timestamp: Not Supported 00:29:18.726 Copy: Supported 00:29:18.726 Volatile Write Cache: Present 00:29:18.726 Atomic Write Unit (Normal): 1 00:29:18.726 Atomic Write Unit (PFail): 1 00:29:18.726 Atomic Compare & Write Unit: 1 00:29:18.726 Fused Compare & Write: Supported 00:29:18.726 Scatter-Gather List 00:29:18.726 SGL Command Set: Supported 00:29:18.726 SGL Keyed: Supported 00:29:18.726 SGL Bit Bucket Descriptor: Not Supported 00:29:18.726 SGL Metadata Pointer: Not Supported 00:29:18.726 Oversized SGL: Not Supported 00:29:18.726 SGL Metadata Address: Not Supported 00:29:18.726 SGL Offset: Supported 00:29:18.726 Transport SGL Data Block: Not Supported 00:29:18.726 Replay Protected Memory Block: Not Supported 00:29:18.726 00:29:18.726 Firmware Slot Information 00:29:18.726 ========================= 00:29:18.726 Active slot: 1 00:29:18.726 Slot 1 Firmware Revision: 24.09 00:29:18.726 00:29:18.726 00:29:18.726 Commands Supported and Effects 00:29:18.726 ============================== 00:29:18.726 Admin Commands 00:29:18.726 -------------- 00:29:18.726 Get Log Page (02h): Supported 00:29:18.726 Identify (06h): Supported 00:29:18.726 Abort (08h): Supported 00:29:18.726 Set Features (09h): Supported 00:29:18.726 Get Features (0Ah): Supported 00:29:18.726 Asynchronous Event Request (0Ch): Supported 00:29:18.726 Keep Alive (18h): Supported 00:29:18.726 I/O Commands 00:29:18.726 ------------ 00:29:18.726 Flush (00h): Supported LBA-Change 00:29:18.726 Write (01h): Supported LBA-Change 00:29:18.726 Read (02h): Supported 00:29:18.726 Compare (05h): Supported 00:29:18.726 Write Zeroes (08h): Supported LBA-Change 00:29:18.726 Dataset Management (09h): Supported LBA-Change 00:29:18.726 Copy (19h): Supported LBA-Change 00:29:18.726 00:29:18.726 Error Log 00:29:18.726 ========= 00:29:18.726 00:29:18.726 Arbitration 00:29:18.726 =========== 00:29:18.726 Arbitration Burst: 1 00:29:18.726 00:29:18.726 Power Management 00:29:18.726 ================ 00:29:18.726 Number of Power States: 1 00:29:18.726 Current Power State: Power State #0 00:29:18.726 Power State #0: 00:29:18.726 Max Power: 0.00 W 00:29:18.726 Non-Operational State: Operational 00:29:18.726 Entry Latency: Not Reported 00:29:18.726 Exit Latency: Not Reported 00:29:18.726 Relative Read Throughput: 0 00:29:18.726 Relative Read Latency: 0 00:29:18.727 Relative Write Throughput: 0 00:29:18.727 Relative Write Latency: 0 00:29:18.727 Idle Power: Not Reported 00:29:18.727 Active Power: Not Reported 00:29:18.727 Non-Operational Permissive Mode: Not Supported 00:29:18.727 00:29:18.727 Health Information 00:29:18.727 ================== 00:29:18.727 Critical Warnings: 00:29:18.727 Available Spare Space: 
OK 00:29:18.727 Temperature: OK 00:29:18.727 Device Reliability: OK 00:29:18.727 Read Only: No 00:29:18.727 Volatile Memory Backup: OK 00:29:18.727 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:18.727 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:18.727 Available Spare: 0% 00:29:18.727 Available Spare Threshold: 0% 00:29:18.727 Life Percentage Used: 0%
[2024-07-13 00:54:30.252515] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
[... interleaved nvme_tcp/nvme_qpair trace elided: completion of the in-flight GET FEATURES ERROR_RECOVERY capsule (cid 7) and four ABORTED - SQ DELETION (00/08) completions for queued admin requests on tqpair 0x661af0 ...]
[2024-07-13 00:54:30.252788] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
[2024-07-13 00:54:30.252792] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
[... repeated, near-identical FABRIC PROPERTY GET shutdown-poll DEBUG blocks (cid 3, tqpair 0x661af0) elided ...]
[2024-07-13 00:54:30.259433] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:29:18.729 Data Units Read: 0 00:29:18.729 Data Units Written: 0 00:29:18.729 Host Read Commands: 0 00:29:18.729 Host Write Commands: 0 00:29:18.729 Controller Busy Time: 0 minutes 00:29:18.729 Power Cycles: 0 00:29:18.729 Power On Hours: 0 hours 00:29:18.729 Unsafe Shutdowns: 0 00:29:18.729 Unrecoverable Media Errors: 0 00:29:18.729 Lifetime Error Log Entries: 0 00:29:18.729 Warning Temperature Time: 0 minutes 00:29:18.729 Critical Temperature Time: 0 minutes 00:29:18.729 00:29:18.729 Number of Queues 00:29:18.729 ================ 00:29:18.729 Number of I/O Submission Queues: 127 00:29:18.729 Number of I/O Completion Queues: 127 00:29:18.729 00:29:18.729 Active Namespaces 00:29:18.729 ================= 00:29:18.729 Namespace ID:1 00:29:18.729 Error Recovery Timeout: Unlimited 00:29:18.729 Command Set Identifier: NVM (00h) 00:29:18.729 Deallocate: Supported 00:29:18.729 Deallocated/Unwritten Error: Not Supported 
00:29:18.729 Deallocated Read Value: Unknown 00:29:18.729 Deallocate in Write Zeroes: Not Supported 00:29:18.729 Deallocated Guard Field: 0xFFFF 00:29:18.729 Flush: Supported 00:29:18.729 Reservation: Supported 00:29:18.729 Namespace Sharing Capabilities: Multiple Controllers 00:29:18.729 Size (in LBAs): 131072 (0GiB) 00:29:18.729 Capacity (in LBAs): 131072 (0GiB) 00:29:18.729 Utilization (in LBAs): 131072 (0GiB) 00:29:18.729 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:18.729 EUI64: ABCDEF0123456789 00:29:18.729 UUID: ff2c462b-7974-4820-b106-8a11cac2d898 00:29:18.729 Thin Provisioning: Not Supported 00:29:18.729 Per-NS Atomic Units: Yes 00:29:18.729 Atomic Boundary Size (Normal): 0 00:29:18.729 Atomic Boundary Size (PFail): 0 00:29:18.729 Atomic Boundary Offset: 0 00:29:18.729 Maximum Single Source Range Length: 65535 00:29:18.729 Maximum Copy Length: 65535 00:29:18.729 Maximum Source Range Count: 1 00:29:18.729 NGUID/EUI64 Never Reused: No 00:29:18.729 Namespace Write Protected: No 00:29:18.729 Number of LBA Formats: 1 00:29:18.729 Current LBA Format: LBA Format #00 00:29:18.729 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:18.729 00:29:18.729 00:54:30 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:18.989 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:18.989 rmmod nvme_tcp 00:29:18.989 rmmod nvme_fabrics 00:29:18.990 rmmod nvme_keyring 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1528156 ']' 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1528156 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1528156 ']' 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1528156 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1528156 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1528156' 00:29:18.990 killing process with pid 1528156 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1528156 00:29:18.990 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1528156 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:19.250 00:54:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.156 00:54:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:21.156 00:29:21.156 real 0m9.702s 00:29:21.156 user 0m8.120s 00:29:21.156 sys 0m4.663s 00:29:21.156 00:54:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:21.156 00:54:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:21.156 ************************************ 00:29:21.156 END TEST nvmf_identify 00:29:21.156 ************************************ 00:29:21.156 00:54:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:21.156 00:54:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:21.156 00:54:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:21.156 00:54:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.156 00:54:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.415 ************************************ 00:29:21.415 START TEST nvmf_perf 00:29:21.415 ************************************ 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:21.415 * Looking for test storage... 
00:29:21.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:21.415 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.416 00:54:32 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:21.416 00:54:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:27.995 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:27.996 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:27.996 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:27.996 Found net devices under 0000:86:00.0: cvl_0_0 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:27.996 Found net devices under 0000:86:00.1: cvl_0_1 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:27.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:29:27.996 00:29:27.996 --- 10.0.0.2 ping statistics --- 00:29:27.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.996 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:27.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:29:27.996 00:29:27.996 --- 10.0.0.1 ping statistics --- 00:29:27.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.996 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1531783 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1531783 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1531783 ']' 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.996 00:54:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.996 [2024-07-13 00:54:38.653093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:29:27.996 [2024-07-13 00:54:38.653137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.996 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.996 [2024-07-13 00:54:38.722437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:27.996 [2024-07-13 00:54:38.765462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.996 [2024-07-13 00:54:38.765493] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:27.996 [2024-07-13 00:54:38.765501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.996 [2024-07-13 00:54:38.765507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.996 [2024-07-13 00:54:38.765513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:27.996 [2024-07-13 00:54:38.765573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.996 [2024-07-13 00:54:38.765680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.996 [2024-07-13 00:54:38.765787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.996 [2024-07-13 00:54:38.765788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:27.996 00:54:39 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:31.331 00:54:42 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:31.588 [2024-07-13 00:54:43.024302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.588 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.847 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:31.847 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.105 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:32.105 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:32.105 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.362 [2024-07-13 00:54:43.760405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.362 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.621 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:32.621 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:32.621 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:32.621 00:54:43 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:34.000 Initializing NVMe Controllers 00:29:34.000 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:34.000 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:34.000 Initialization complete. Launching workers. 00:29:34.000 ======================================================== 00:29:34.000 Latency(us) 00:29:34.000 Device Information : IOPS MiB/s Average min max 00:29:34.000 PCIE (0000:5e:00.0) NSID 1 from core 0: 97402.70 380.48 328.12 9.63 5216.25 00:29:34.000 ======================================================== 00:29:34.000 Total : 97402.70 380.48 328.12 9.63 5216.25 00:29:34.000 00:29:34.000 00:54:45 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.000 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.379 Initializing NVMe Controllers 00:29:35.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:35.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:35.379 Initialization complete. Launching workers. 
00:29:35.379 ========================================================
00:29:35.379 Latency(us)
00:29:35.379 Device Information : IOPS MiB/s Average min max
00:29:35.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.95 0.37 10671.41 110.65 45425.72
00:29:35.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.97 0.26 15764.06 3993.73 47906.12
00:29:35.379 ========================================================
00:29:35.379 Total : 161.91 0.63 12746.19 110.65 47906.12
00:29:35.379
00:29:35.379 00:54:46 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:35.379 EAL: No free 2048 kB hugepages reported on node 1
00:29:36.756 Initializing NVMe Controllers
00:29:36.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:36.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:36.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:36.756 Initialization complete. Launching workers.
00:29:39.286 ========================================================
00:29:39.286 Latency(us)
00:29:39.286 Device Information : IOPS MiB/s Average min max
00:29:39.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1885.01 471.25 68847.72 44683.43 109732.47
00:29:39.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 624.51 156.13 214635.95 84030.96 329260.01
00:29:39.286 ========================================================
00:29:39.286 Total : 2509.51 627.38 105128.02 44683.43 329260.01
00:29:39.286
00:29:39.286 00:54:50 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:29:39.286 EAL: No free 2048 kB hugepages reported on node 1
00:29:39.286 No valid NVMe controllers or AIO or URING devices found
00:29:39.286 Initializing NVMe Controllers
00:29:39.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:39.286 Controller IO queue size 128, less than required.
00:29:39.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.286 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:29:39.286 Controller IO queue size 128, less than required.
00:29:39.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:39.286 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:29:39.286 WARNING: Some requested NVMe devices were skipped
00:29:39.286 00:54:50 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:29:39.286 EAL: No free 2048 kB hugepages reported on node 1
00:29:41.822 Initializing NVMe Controllers
00:29:41.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:41.822 Controller IO queue size 128, less than required.
00:29:41.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:41.822 Controller IO queue size 128, less than required.
00:29:41.822 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:41.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:41.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:41.822 Initialization complete. Launching workers.
00:29:41.822 00:29:41.822 ==================== 00:29:41.822 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:41.822 TCP transport: 00:29:41.822 polls: 13089 00:29:41.822 idle_polls: 7738 00:29:41.822 sock_completions: 5351 00:29:41.822 nvme_completions: 6497 00:29:41.822 submitted_requests: 9728 00:29:41.822 queued_requests: 1 00:29:41.822 00:29:41.822 ==================== 00:29:41.822 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:41.822 TCP transport: 00:29:41.822 polls: 9381 00:29:41.822 idle_polls: 4402 00:29:41.822 sock_completions: 4979 00:29:41.822 nvme_completions: 6991 00:29:41.822 submitted_requests: 10410 00:29:41.822 queued_requests: 1 00:29:41.822 ======================================================== 00:29:41.822 Latency(us) 00:29:41.822 Device Information : IOPS MiB/s Average min max 00:29:41.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1623.90 405.98 80599.06 48914.51 139386.95 00:29:41.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1747.40 436.85 73784.04 34815.65 100518.41 00:29:41.822 ======================================================== 00:29:41.822 Total : 3371.30 842.82 77066.73 34815.65 139386.95 00:29:41.822 00:29:41.822 00:54:53 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:41.822 00:54:53 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.822 00:54:53 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:41.822 00:54:53 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:29:41.822 00:54:53 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:45.131 00:54:56 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=f61d3620-6355-449f-97cc-bf5e90c8c6da 00:29:45.131 00:54:56 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb f61d3620-6355-449f-97cc-bf5e90c8c6da 00:29:45.131 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=f61d3620-6355-449f-97cc-bf5e90c8c6da 00:29:45.131 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:45.131 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:45.131 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:45.131 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:45.390 { 00:29:45.390 "uuid": "f61d3620-6355-449f-97cc-bf5e90c8c6da", 00:29:45.390 "name": "lvs_0", 00:29:45.390 "base_bdev": "Nvme0n1", 00:29:45.390 "total_data_clusters": 238234, 00:29:45.390 "free_clusters": 238234, 00:29:45.390 "block_size": 512, 00:29:45.390 "cluster_size": 4194304 00:29:45.390 } 00:29:45.390 ]' 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f61d3620-6355-449f-97cc-bf5e90c8c6da") .free_clusters' 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f61d3620-6355-449f-97cc-bf5e90c8c6da") .cluster_size' 00:29:45.390 00:54:56 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:29:45.390 952936 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:29:45.390 00:54:56 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f61d3620-6355-449f-97cc-bf5e90c8c6da lbd_0 20480 00:29:45.957 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=37f45b28-44ee-4db2-860a-5544ce96674c 00:29:45.957 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 37f45b28-44ee-4db2-860a-5544ce96674c lvs_n_0 00:29:46.526 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=f74d1586-ed83-4c1e-87fa-7aed0a554860 00:29:46.526 00:54:57 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb f74d1586-ed83-4c1e-87fa-7aed0a554860 00:29:46.526 00:54:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=f74d1586-ed83-4c1e-87fa-7aed0a554860 00:29:46.526 00:54:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:46.526 00:54:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:29:46.526 00:54:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:29:46.526 00:54:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:46.526 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:46.526 { 00:29:46.526 "uuid": "f61d3620-6355-449f-97cc-bf5e90c8c6da", 00:29:46.526 "name": "lvs_0", 00:29:46.526 "base_bdev": "Nvme0n1", 00:29:46.526 "total_data_clusters": 238234, 00:29:46.526 "free_clusters": 233114, 00:29:46.526 "block_size": 512, 00:29:46.526 "cluster_size": 4194304 00:29:46.526 }, 00:29:46.526 { 00:29:46.526 "uuid": "f74d1586-ed83-4c1e-87fa-7aed0a554860", 00:29:46.526 "name": "lvs_n_0", 00:29:46.526 "base_bdev": "37f45b28-44ee-4db2-860a-5544ce96674c", 00:29:46.526 "total_data_clusters": 5114, 00:29:46.526 "free_clusters": 5114, 00:29:46.526 "block_size": 512, 00:29:46.526 "cluster_size": 4194304 00:29:46.526 } 00:29:46.526 ]' 00:29:46.526 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f74d1586-ed83-4c1e-87fa-7aed0a554860") .free_clusters' 00:29:46.785 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:29:46.785 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f74d1586-ed83-4c1e-87fa-7aed0a554860") .cluster_size' 00:29:46.785 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:46.785 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:29:46.785 00:54:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:29:46.785 20456 00:29:46.785 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:29:46.785 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f74d1586-ed83-4c1e-87fa-7aed0a554860 lbd_nest_0 20456 00:29:47.044 00:54:58 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=9020bcb9-8728-4e74-84a7-6021e2b0333d 00:29:47.045 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.045 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:47.045 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9020bcb9-8728-4e74-84a7-6021e2b0333d 00:29:47.303 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.562 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:47.562 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:47.562 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:47.562 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:47.562 00:54:58 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.562 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.829 Initializing NVMe Controllers 00:29:59.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.829 Initialization complete. Launching workers. 00:29:59.829 ======================================================== 00:29:59.829 Latency(us) 00:29:59.829 Device Information : IOPS MiB/s Average min max 00:29:59.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.99 0.02 21339.12 131.05 45755.23 00:29:59.829 ======================================================== 00:29:59.829 Total : 46.99 0.02 21339.12 131.05 45755.23 00:29:59.829 00:29:59.829 00:55:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.829 00:55:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.829 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.805 Initializing NVMe Controllers 00:30:09.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.805 Initialization complete. Launching workers. 
00:30:09.805 ======================================================== 00:30:09.805 Latency(us) 00:30:09.805 Device Information : IOPS MiB/s Average min max 00:30:09.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 60.79 7.60 16451.09 7388.32 51874.95 00:30:09.805 ======================================================== 00:30:09.805 Total : 60.79 7.60 16451.09 7388.32 51874.95 00:30:09.805 00:30:09.805 00:55:19 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:09.805 00:55:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:09.805 00:55:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:09.805 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.781 Initializing NVMe Controllers 00:30:19.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:19.781 Initialization complete. Launching workers. 00:30:19.781 ======================================================== 00:30:19.781 Latency(us) 00:30:19.781 Device Information : IOPS MiB/s Average min max 00:30:19.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8529.06 4.16 3753.84 230.07 45278.44 00:30:19.781 ======================================================== 00:30:19.781 Total : 8529.06 4.16 3753.84 230.07 45278.44 00:30:19.781 00:30:19.781 00:55:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:19.781 00:55:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:19.781 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.760 Initializing NVMe Controllers 00:30:29.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:29.760 Initialization complete. Launching workers. 00:30:29.760 ======================================================== 00:30:29.760 Latency(us) 00:30:29.761 Device Information : IOPS MiB/s Average min max 00:30:29.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4166.45 520.81 7680.70 565.93 17590.88 00:30:29.761 ======================================================== 00:30:29.761 Total : 4166.45 520.81 7680.70 565.93 17590.88 00:30:29.761 00:30:29.761 00:55:40 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:29.761 00:55:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:29.761 00:55:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.761 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.742 Initializing NVMe Controllers 00:30:39.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.742 Controller IO queue size 128, less than required. 00:30:39.742 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:39.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:39.742 Initialization complete. Launching workers. 00:30:39.742 ======================================================== 00:30:39.742 Latency(us) 00:30:39.742 Device Information : IOPS MiB/s Average min max 00:30:39.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15874.20 7.75 8068.48 1373.74 47944.76 00:30:39.742 ======================================================== 00:30:39.742 Total : 15874.20 7.75 8068.48 1373.74 47944.76 00:30:39.742 00:30:39.742 00:55:50 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:39.742 00:55:50 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:39.742 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.737 Initializing NVMe Controllers 00:30:49.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:49.737 Controller IO queue size 128, less than required. 00:30:49.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:49.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:49.737 Initialization complete. Launching workers. 00:30:49.737 ======================================================== 00:30:49.737 Latency(us) 00:30:49.737 Device Information : IOPS MiB/s Average min max 00:30:49.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1211.00 151.37 106409.30 23650.80 215555.92 00:30:49.738 ======================================================== 00:30:49.738 Total : 1211.00 151.37 106409.30 23650.80 215555.92 00:30:49.738 00:30:49.738 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.996 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9020bcb9-8728-4e74-84a7-6021e2b0333d 00:30:50.564 00:56:01 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:50.822 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37f45b28-44ee-4db2-860a-5544ce96674c 00:30:50.822 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:51.080 rmmod nvme_tcp 00:30:51.080 rmmod nvme_fabrics 00:30:51.080 rmmod nvme_keyring 00:30:51.080 00:56:02 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1531783 ']' 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1531783 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1531783 ']' 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1531783 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:51.080 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1531783 00:30:51.338 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:51.338 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:51.338 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1531783' 00:30:51.338 killing process with pid 1531783 00:30:51.338 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1531783 00:30:51.338 00:56:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1531783 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.714 00:56:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.246 00:56:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:55.246 00:30:55.246 real 1m33.474s 00:30:55.246 user 5m34.492s 00:30:55.246 sys 0m16.174s 00:30:55.246 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:55.246 00:56:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:55.246 ************************************ 00:30:55.246 END TEST nvmf_perf 00:30:55.246 ************************************ 00:30:55.246 00:56:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:55.246 00:56:06 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:55.246 00:56:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:55.246 00:56:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:55.246 00:56:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:55.246 ************************************ 00:30:55.246 START TEST nvmf_fio_host 00:30:55.246 ************************************ 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:55.246 * Looking for test 
storage... 00:30:55.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.246 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:55.247 00:56:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:00.521 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:00.521 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:00.521 Found net devices under 0000:86:00.0: cvl_0_0 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:00.521 Found net devices under 0000:86:00.1: cvl_0_1 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
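
The device sweep that just completed (nvmf/common.sh@289-414) boils down to mapping PCI functions to kernel net devices through sysfs. This is a simplified sketch reconstructed only from the xtrace above, not the full nvmf/common.sh logic (which also covers RDMA, x722, and Mellanox device IDs); the PCI addresses are the two ports this run reported:

pci_devs=(0000:86:00.0 0000:86:00.1)       # E810 ports found above (0x8086 - 0x159b)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# With more than one interface found, cvl_0_0 becomes NVMF_TARGET_INTERFACE and
# cvl_0_1 NVMF_INITIATOR_INTERFACE, matching the (( 2 > 1 )) branch traced below.
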
00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:00.521 00:56:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.521 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.521 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:00.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:31:00.781 00:31:00.781 --- 10.0.0.2 ping statistics --- 00:31:00.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.781 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:31:00.781 00:31:00.781 --- 10.0.0.1 ping statistics --- 00:31:00.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.781 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1548865 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1548865 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1548865 ']' 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.781 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:00.781 [2024-07-13 00:56:12.187447] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:00.781 [2024-07-13 00:56:12.187496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.781 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.781 [2024-07-13 00:56:12.258797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.781 [2024-07-13 00:56:12.300824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:00.781 [2024-07-13 00:56:12.300863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.781 [2024-07-13 00:56:12.300870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.781 [2024-07-13 00:56:12.300876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.781 [2024-07-13 00:56:12.300881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.781 [2024-07-13 00:56:12.300924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.781 [2024-07-13 00:56:12.301038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.781 [2024-07-13 00:56:12.301144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.781 [2024-07-13 00:56:12.301145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.040 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.040 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:31:01.040 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:01.040 [2024-07-13 00:56:12.553706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.040 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:01.040 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.040 00:56:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.299 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:01.299 Malloc1 00:31:01.299 00:56:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.557 00:56:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:01.815 00:56:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.815 [2024-07-13 00:56:13.339822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.816 00:56:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.074 00:56:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:02.074 00:56:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:02.074 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:02.075 00:56:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:02.334 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:02.334 fio-3.35 00:31:02.334 Starting 1 thread 00:31:02.334 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.868 [2024-07-13 00:56:16.315133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c410 is same with the state(5) to be set 00:31:04.868 [2024-07-13 00:56:16.315181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c410 is same with the state(5) to be set 00:31:04.868 [2024-07-13 00:56:16.315189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c410 is same with the state(5) to be set 00:31:04.868 00:31:04.868 test: (groupid=0, jobs=1): err= 0: pid=1549391: Sat Jul 13 00:56:16 2024 00:31:04.868 read: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.2MiB/2005msec) 00:31:04.868 slat (nsec): min=1602, max=219317, avg=1741.15, stdev=2030.84 00:31:04.868 clat (usec): min=2866, max=10823, avg=5992.24, stdev=448.86 00:31:04.868 
lat (usec): min=2896, max=10824, avg=5993.98, stdev=448.76 00:31:04.868 clat percentiles (usec): 00:31:04.868 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5669], 00:31:04.868 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:31:04.868 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:31:04.868 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8094], 99.95th=[ 9372], 00:31:04.868 | 99.99th=[10814] 00:31:04.868 bw ( KiB/s): min=45948, max=47824, per=99.96%, avg=47067.00, stdev=858.44, samples=4 00:31:04.869 iops : min=11487, max=11956, avg=11766.75, stdev=214.61, samples=4 00:31:04.869 write: IOPS=11.7k, BW=45.7MiB/s (48.0MB/s)(91.7MiB/2005msec); 0 zone resets 00:31:04.869 slat (nsec): min=1643, max=206880, avg=1817.69, stdev=1512.17 00:31:04.869 clat (usec): min=2245, max=9565, avg=4843.83, stdev=383.28 00:31:04.869 lat (usec): min=2259, max=9567, avg=4845.65, stdev=383.26 00:31:04.869 clat percentiles (usec): 00:31:04.869 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:31:04.869 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:31:04.869 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:31:04.869 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 8160], 99.95th=[ 8717], 00:31:04.869 | 99.99th=[ 9503] 00:31:04.869 bw ( KiB/s): min=46506, max=47104, per=99.90%, avg=46794.50, stdev=290.39, samples=4 00:31:04.869 iops : min=11626, max=11776, avg=11698.50, stdev=72.76, samples=4 00:31:04.869 lat (msec) : 4=0.59%, 10=99.40%, 20=0.01% 00:31:04.869 cpu : usr=75.35%, sys=23.25%, ctx=55, majf=0, minf=6 00:31:04.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:04.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:04.869 issued rwts: total=23601,23480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:04.869 00:31:04.869 Run status group 0 (all jobs): 00:31:04.869 READ: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.2MiB (96.7MB), run=2005-2005msec 00:31:04.869 WRITE: bw=45.7MiB/s (48.0MB/s), 45.7MiB/s-45.7MiB/s (48.0MB/s-48.0MB/s), io=91.7MiB (96.2MB), run=2005-2005msec 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # local asan_lib= 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:04.869 00:56:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:05.128 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:05.128 fio-3.35 00:31:05.128 Starting 1 thread 00:31:05.386 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.759 [2024-07-13 00:56:18.151807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x698430 is same with the state(5) to be set 00:31:07.693 [2024-07-13 00:56:19.072211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x699160 is same with the state(5) to be set 00:31:07.693 00:31:07.693 test: (groupid=0, jobs=1): err= 0: pid=1549906: Sat Jul 13 00:56:19 2024 00:31:07.693 read: IOPS=10.4k, BW=163MiB/s (171MB/s)(327MiB/2007msec) 00:31:07.693 slat (nsec): min=2594, max=84111, avg=2828.15, stdev=1190.80 00:31:07.693 clat (usec): min=1600, max=50526, avg=7204.16, stdev=4578.00 00:31:07.693 lat (usec): min=1603, max=50529, avg=7206.99, stdev=4578.02 00:31:07.693 clat percentiles (usec): 00:31:07.693 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5342], 00:31:07.693 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6849], 60.00th=[ 7308], 00:31:07.693 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9503], 00:31:07.693 | 99.00th=[43779], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:31:07.693 | 99.99th=[49546] 00:31:07.693 bw ( KiB/s): min=75264, max=97280, per=50.96%, avg=85136.00, stdev=9977.31, samples=4 00:31:07.693 iops : min= 4704, max= 6080, avg=5321.00, stdev=623.58, samples=4 00:31:07.693 write: IOPS=6489, BW=101MiB/s (106MB/s)(174MiB/1713msec); 0 zone resets 00:31:07.693 slat (usec): min=30, max=318, avg=31.86, stdev= 6.15 00:31:07.693 clat (usec): min=4588, max=15102, 
avg=8669.85, stdev=1506.97 00:31:07.693 lat (usec): min=4619, max=15133, avg=8701.71, stdev=1507.79 00:31:07.693 clat percentiles (usec): 00:31:07.693 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 6915], 20.00th=[ 7373], 00:31:07.693 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:31:07.693 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11600], 00:31:07.693 | 99.00th=[12649], 99.50th=[13042], 99.90th=[14091], 99.95th=[14484], 00:31:07.693 | 99.99th=[15008] 00:31:07.693 bw ( KiB/s): min=77792, max=101376, per=85.14%, avg=88408.00, stdev=10657.57, samples=4 00:31:07.693 iops : min= 4862, max= 6336, avg=5525.50, stdev=666.10, samples=4 00:31:07.693 lat (msec) : 2=0.03%, 4=1.55%, 10=89.57%, 20=8.06%, 50=0.79% 00:31:07.693 lat (msec) : 100=0.01% 00:31:07.693 cpu : usr=85.69%, sys=13.66%, ctx=45, majf=0, minf=3 00:31:07.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:31:07.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:07.693 issued rwts: total=20958,11117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:07.693 00:31:07.693 Run status group 0 (all jobs): 00:31:07.693 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=327MiB (343MB), run=2007-2007msec 00:31:07.693 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=174MiB (182MB), run=1713-1713msec 00:31:07.693 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:31:07.951 00:56:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:11.236 Nvme0n1 00:31:11.236 00:56:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:13.766 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a2959640-5f73-402e-b54a-eda266912477 00:31:13.766 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a2959640-5f73-402e-b54a-eda266912477 00:31:13.766 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local 
lvs_uuid=a2959640-5f73-402e-b54a-eda266912477 00:31:13.766 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:13.766 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:13.766 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:13.767 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:14.101 { 00:31:14.101 "uuid": "a2959640-5f73-402e-b54a-eda266912477", 00:31:14.101 "name": "lvs_0", 00:31:14.101 "base_bdev": "Nvme0n1", 00:31:14.101 "total_data_clusters": 930, 00:31:14.101 "free_clusters": 930, 00:31:14.101 "block_size": 512, 00:31:14.101 "cluster_size": 1073741824 00:31:14.101 } 00:31:14.101 ]' 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a2959640-5f73-402e-b54a-eda266912477") .free_clusters' 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a2959640-5f73-402e-b54a-eda266912477") .cluster_size' 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:14.101 952320 00:31:14.101 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:14.360 b3f558e4-9e55-4fb7-902e-24f1f6ac7471 00:31:14.360 00:56:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:14.619 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:14.878 00:56:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:15.136 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:15.136 fio-3.35 00:31:15.136 Starting 1 thread 00:31:15.393 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.950 00:31:17.950 test: (groupid=0, jobs=1): err= 0: pid=1551609: Sat Jul 13 00:56:29 2024 00:31:17.950 read: IOPS=7971, BW=31.1MiB/s (32.6MB/s)(62.5MiB/2006msec) 00:31:17.950 slat (nsec): min=1603, max=92191, avg=1702.92, stdev=1007.27 00:31:17.950 clat (usec): min=652, max=169781, avg=8838.94, stdev=10310.36 00:31:17.950 lat (usec): min=654, max=169800, avg=8840.64, stdev=10310.51 00:31:17.950 clat percentiles (msec): 00:31:17.950 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:31:17.950 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:31:17.950 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:31:17.950 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:31:17.950 | 99.99th=[ 169] 00:31:17.950 bw ( KiB/s): min=22536, max=35016, per=99.87%, avg=31844.00, stdev=6205.70, samples=4 00:31:17.950 iops : min= 5634, max= 8754, avg=7961.00, stdev=1551.42, samples=4 00:31:17.950 write: IOPS=7948, BW=31.0MiB/s (32.6MB/s)(62.3MiB/2006msec); 0 zone resets 00:31:17.950 slat (nsec): min=1659, max=92165, avg=1775.68, stdev=781.38 00:31:17.950 clat (usec): min=221, max=168385, avg=7129.99, stdev=9640.51 00:31:17.950 lat (usec): min=223, max=168390, avg=7131.77, stdev=9640.70 00:31:17.950 clat percentiles (msec): 
00:31:17.950 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:31:17.950 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:17.950 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:17.950 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:31:17.950 | 99.99th=[ 169] 00:31:17.950 bw ( KiB/s): min=23464, max=34688, per=99.93%, avg=31770.00, stdev=5538.40, samples=4 00:31:17.950 iops : min= 5866, max= 8672, avg=7942.50, stdev=1384.60, samples=4 00:31:17.950 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:17.950 lat (msec) : 2=0.05%, 4=0.19%, 10=99.17%, 20=0.17%, 250=0.40% 00:31:17.950 cpu : usr=72.27%, sys=26.58%, ctx=95, majf=0, minf=6 00:31:17.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:17.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:17.950 issued rwts: total=15990,15944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.950 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:17.950 00:31:17.950 Run status group 0 (all jobs): 00:31:17.950 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.5MiB (65.5MB), run=2006-2006msec 00:31:17.950 WRITE: bw=31.0MiB/s (32.6MB/s), 31.0MiB/s-31.0MiB/s (32.6MB/s-32.6MB/s), io=62.3MiB (65.3MB), run=2006-2006msec 00:31:17.950 00:56:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:17.950 00:56:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:18.886 00:56:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=5b3a2873-dbae-40a2-a408-7f8e5e6555c6 00:31:18.886 00:56:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 5b3a2873-dbae-40a2-a408-7f8e5e6555c6 00:31:18.886 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=5b3a2873-dbae-40a2-a408-7f8e5e6555c6 00:31:18.886 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:18.886 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:18.886 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:18.886 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:19.145 { 00:31:19.145 "uuid": "a2959640-5f73-402e-b54a-eda266912477", 00:31:19.145 "name": "lvs_0", 00:31:19.145 "base_bdev": "Nvme0n1", 00:31:19.145 "total_data_clusters": 930, 00:31:19.145 "free_clusters": 0, 00:31:19.145 "block_size": 512, 00:31:19.145 "cluster_size": 1073741824 00:31:19.145 }, 00:31:19.145 { 00:31:19.145 "uuid": "5b3a2873-dbae-40a2-a408-7f8e5e6555c6", 00:31:19.145 "name": "lvs_n_0", 00:31:19.145 "base_bdev": "b3f558e4-9e55-4fb7-902e-24f1f6ac7471", 00:31:19.145 "total_data_clusters": 237847, 00:31:19.145 "free_clusters": 237847, 00:31:19.145 "block_size": 512, 00:31:19.145 "cluster_size": 4194304 00:31:19.145 } 00:31:19.145 ]' 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5b3a2873-dbae-40a2-a408-7f8e5e6555c6") 
.free_clusters' 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5b3a2873-dbae-40a2-a408-7f8e5e6555c6") .cluster_size' 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:19.145 951388 00:31:19.145 00:56:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:19.713 5411bd00-aa0a-4a8f-b47c-70315ed8da28 00:31:19.713 00:56:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:19.972 00:56:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:19.972 00:56:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:20.231 00:56:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:20.490 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:20.490 fio-3.35 00:31:20.490 Starting 1 thread 00:31:20.490 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.026 [2024-07-13 00:56:34.364494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5e80 is same with the state(5) to be set 00:31:23.026 00:31:23.026 test: (groupid=0, jobs=1): err= 0: pid=1552600: Sat Jul 13 00:56:34 2024 00:31:23.026 read: IOPS=7774, BW=30.4MiB/s (31.8MB/s)(61.0MiB/2007msec) 00:31:23.026 slat (nsec): min=1586, max=104160, avg=1696.53, stdev=1141.38 00:31:23.026 clat (usec): min=3114, max=14859, avg=9083.06, stdev=787.68 00:31:23.026 lat (usec): min=3117, max=14860, avg=9084.76, stdev=787.62 00:31:23.026 clat percentiles (usec): 00:31:23.026 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:31:23.026 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:31:23.026 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:31:23.026 | 99.00th=[10814], 99.50th=[10945], 99.90th=[12518], 99.95th=[13698], 00:31:23.026 | 99.99th=[14877] 00:31:23.026 bw ( KiB/s): min=30112, max=31560, per=99.89%, avg=31066.00, stdev=677.04, samples=4 00:31:23.026 iops : min= 7528, max= 7890, avg=7766.50, stdev=169.26, samples=4 00:31:23.026 write: IOPS=7758, BW=30.3MiB/s (31.8MB/s)(60.8MiB/2007msec); 0 zone resets 00:31:23.026 slat (nsec): min=1634, max=82775, avg=1772.12, stdev=743.13 00:31:23.026 clat (usec): min=1468, max=13560, avg=7327.10, stdev=654.76 00:31:23.026 lat (usec): min=1473, max=13562, avg=7328.87, stdev=654.73 00:31:23.026 clat percentiles (usec): 00:31:23.026 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6849], 00:31:23.026 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:31:23.026 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:31:23.026 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11207], 99.95th=[11469], 00:31:23.026 | 99.99th=[12649] 00:31:23.026 bw ( KiB/s): min=30912, max=31192, per=99.99%, avg=31032.00, stdev=120.27, samples=4 00:31:23.026 iops : min= 7728, max= 7798, avg=7758.00, stdev=30.07, samples=4 00:31:23.026 lat (msec) : 2=0.01%, 4=0.09%, 10=94.49%, 20=5.41% 00:31:23.026 cpu : usr=71.09%, sys=27.87%, ctx=49, majf=0, minf=6 00:31:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:31:23.026 issued rwts: total=15604,15572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:23.026 00:31:23.026 Run status group 0 (all jobs): 00:31:23.026 READ: bw=30.4MiB/s (31.8MB/s), 30.4MiB/s-30.4MiB/s (31.8MB/s-31.8MB/s), io=61.0MiB (63.9MB), run=2007-2007msec 00:31:23.027 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.8MiB (63.8MB), run=2007-2007msec 00:31:23.027 00:56:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:23.286 00:56:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:23.286 00:56:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:27.481 00:56:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:27.481 00:56:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:30.018 00:56:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:30.018 00:56:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:31.924 rmmod nvme_tcp 00:31:31.924 rmmod nvme_fabrics 00:31:31.924 rmmod nvme_keyring 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1548865 ']' 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1548865 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1548865 ']' 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1548865 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1548865 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:31.924 00:56:43 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1548865' 00:31:31.924 killing process with pid 1548865 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1548865 00:31:31.924 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1548865 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:32.182 00:56:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.720 00:56:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:34.720 00:31:34.720 real 0m39.429s 00:31:34.720 user 2m37.776s 00:31:34.720 sys 0m8.648s 00:31:34.720 00:56:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:34.720 00:56:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.720 ************************************ 00:31:34.720 END TEST nvmf_fio_host 00:31:34.720 ************************************ 00:31:34.720 00:56:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:34.720 00:56:45 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:34.720 00:56:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:34.720 00:56:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:34.720 00:56:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:34.720 ************************************ 00:31:34.720 START TEST nvmf_failover 00:31:34.720 ************************************ 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:34.720 * Looking for test storage... 
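Before going further into nvmf_failover, it is worth collapsing what the nvmf_fio_host test above actually exercised. Stripped of the xtrace noise, it reduces to a handful of RPC calls plus the fio plugin run; a condensed sketch, with $rootdir again our shorthand for the workspace path that the real script spells out in full:

rpc="$rootdir/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# fio reaches the subsystem through the preloaded SPDK ioengine plugin:
LD_PRELOAD="$rootdir/build/fio/spdk_nvme" /usr/src/fio/fio \
    "$rootdir/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The free_mb values printed by get_lvs_free_mb above also follow directly from the lvstore JSON: 930 free clusters at a 1 GiB cluster size gives 930 * 1024 = 952320 MiB for lvs_0, and 237847 free clusters at 4 MiB gives 237847 * 4 = 951388 MiB for the nested lvs_n_0.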
00:31:34.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:34.720 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:34.721 00:56:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:39.997 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:39.998 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:39.998 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:39.998 Found net devices under 0000:86:00.0: cvl_0_0 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:39.998 Found net devices under 0000:86:00.1: cvl_0_1 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:39.998 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:40.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:40.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:31:40.281 00:31:40.281 --- 10.0.0.2 ping statistics --- 00:31:40.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.281 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:40.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:31:40.281 00:31:40.281 --- 10.0.0.1 ping statistics --- 00:31:40.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.281 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1557914 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1557914 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1557914 ']' 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:40.281 00:56:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:40.281 [2024-07-13 00:56:51.684835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
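One detail worth flagging before the EAL banner below: the failover target starts with -m 0xE where the fio host test used -m 0xF. The mask is a per-core bitmap, so 0xE (binary 1110) leaves core 0 free, which matches the "Total cores available: 3" notice and the three reactor threads that follow. A throwaway snippet of ours to decode such masks (not part of the test scripts):

for mask in 0xE 0xF; do
  printf '%s -> cores:' "$mask"
  for core in 0 1 2 3; do
    (( mask & (1 << core) )) && printf ' %d' "$core"
  done
  echo
done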
00:31:40.281 [2024-07-13 00:56:51.684877] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.281 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.281 [2024-07-13 00:56:51.756869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.281 [2024-07-13 00:56:51.797161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.281 [2024-07-13 00:56:51.797202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.281 [2024-07-13 00:56:51.797209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.281 [2024-07-13 00:56:51.797215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.281 [2024-07-13 00:56:51.797220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.281 [2024-07-13 00:56:51.797364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.281 [2024-07-13 00:56:51.797471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.281 [2024-07-13 00:56:51.797472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:41.229 [2024-07-13 00:56:52.688543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.229 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:41.486 Malloc0 00:31:41.486 00:56:52 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:41.744 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:42.002 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.002 [2024-07-13 00:56:53.467904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.002 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:42.260 [2024-07-13 
00:56:53.652412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:42.260 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:42.518 [2024-07-13 00:56:53.824972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1558204 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1558204 /var/tmp/bdevperf.sock 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1558204 ']' 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:42.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:42.518 00:56:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:43.453 00:56:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:43.453 00:56:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:43.453 00:56:54 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:43.711 NVMe0n1 00:31:43.711 00:56:55 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:43.970 00:31:43.970 00:56:55 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1558434 00:31:43.970 00:56:55 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:43.970 00:56:55 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:44.907 00:56:56 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.165 [2024-07-13 00:56:56.559890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241b270 is same with the state(5) to be set 00:31:45.165 [2024-07-13 00:56:56.559960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x241b270 is same with the state(5) to be set [...] 00:31:45.166 00:56:56 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:48.453 00:56:59 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.453 00:31:48.453 00:56:59 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:48.712 [2024-07-13 00:57:00.164725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c630 is same with the state(5) to be set [...]
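Condensed, the target and workload setup that produced the trace above is the sequence below. The paths are this run's workspace paths as logged; RPC and NQN are shorthand variables introduced here for readability, not names from the test scripts.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, one
  # subsystem, and listeners on three ports so paths can be failed over.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
  done

  # Initiator side: bdevperf waits for RPCs (-z), then runs a queue-depth-128,
  # 4 KiB verify workload for 15 s; one controller is attached per active path.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN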
00:31:48.712 00:57:00 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:52.001 00:57:03 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:52.001 [2024-07-13 00:57:03.369409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.001 00:57:03 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:52.936 00:57:04 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:53.194 [2024-07-13 00:57:04.567496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241cd10 is same with the state(5) to be set [...]
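Stripped of the per-qpair state noise elided above, the failover exercise itself is just the listener dance below, run while the verify job (started earlier via bdevperf.py perform_tests) is in flight: pull the active port, give the initiator time to fail over, and rotate through the three paths. Variables are as in the previous sketch.

  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop the first path
  sleep 3                                                               # let I/O fail over
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  sleep 3
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # restore the first path
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # drop the last spare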
00:31:53.195 00:57:04 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1558434 00:31:59.767 0 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1558204 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1558204 ']' 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1558204 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1558204 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1558204' 00:31:59.767 killing process with pid 1558204 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1558204 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1558204 00:31:59.767 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:59.767 [2024-07-13 00:56:53.897536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:31:59.767 [2024-07-13 00:56:53.897592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558204 ] 00:31:59.767 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.767 [2024-07-13 00:56:53.963144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.767 [2024-07-13 00:56:54.003999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.767 Running I/O for 15 seconds... 00:31:59.767 [2024-07-13 00:56:56.561712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.767 [2024-07-13 00:56:56.561747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.767 [2024-07-13 00:56:56.561763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.767 [2024-07-13 00:56:56.561771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.767 [2024-07-13 00:56:56.561780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.768 [2024-07-13 00:56:56.561787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.768 [2024-07-13 00:56:56.561795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.768 [2024-07-13 00:56:56.561802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.768 [2024-07-13 00:56:56.561810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.768 [2024-07-13 00:56:56.561817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.768 [2024-07-13 00:56:56.561825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.768 [2024-07-13 00:56:56.561831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.768 [2024-07-13 00:56:56.561839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.768 [2024-07-13 00:56:56.561845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.768 [2024-07-13 00:56:56.561853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.768 [2024-07-13 00:56:56.561860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.768 [2024-07-13 00:56:56.561868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.768 [2024-07-13 00:56:56.561874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [...] 00:31:59.768 [2024-07-13 00:56:56.562124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.768 [2024-07-13 00:56:56.562131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [...] 00:31:59.769 [2024-07-13 00:56:56.562982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.768 [2024-07-13 00:56:56.562989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97088 len:8 PRP1 0x0 PRP2 0x0 00:31:59.768 [2024-07-13 00:56:56.562996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [...] 00:31:59.771 [2024-07-13 00:56:56.563635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:31:59.771 [2024-07-13 00:56:56.563657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.563680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.563704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.563727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.563750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.563776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.563799] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.563804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.563809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.563815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97432 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97440 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.771 [2024-07-13 00:56:56.573780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.771 [2024-07-13 00:56:56.573787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:31:59.771 [2024-07-13 00:56:56.573795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:56:56.573840] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15e65c0 was disconnected and freed. reset controller. 
00:31:59.771 [2024-07-13 00:56:56.573850] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:59.771 [2024-07-13 00:56:56.573877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:59.771 [2024-07-13 00:56:56.573887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 ...]
00:31:59.771 [2024-07-13 00:56:56.573951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:59.771 [2024-07-13 00:56:56.573996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bffd0 (9): Bad file descriptor
00:31:59.771 [2024-07-13 00:56:56.577848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:59.771 [2024-07-13 00:56:56.646793] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
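The abort storms above and below are easier to audit when collapsed into LBA ranges, as done in the condensed summaries here. A minimal sketch of how that condensation can be scripted, assuming only the log format visible above (this is a hypothetical helper, not part of SPDK or this test suite):

#!/usr/bin/env python3
# Hypothetical helper: fold consecutive SPDK nvme_io_qpair_print_command
# prints into contiguous LBA runs. Reads a log on stdin, prints one
# summary line per run. Sketch only; it keys runs on opcode and does not
# distinguish sqid/nsid changes within a run.
import re
import sys

# Matches lines like:
#   ... *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97120 len:8 ...
CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:\d+ nsid:(\d+) lba:(\d+) len:(\d+)")

runs = []  # each entry: (opcode, sqid, nsid, first_lba, last_lba, count)
for line in sys.stdin:
    m = CMD.search(line)
    if not m:
        continue
    op = m.group(1)
    sqid, nsid, lba, length = (int(m.group(i)) for i in (2, 3, 4, 5))
    # Extend the current run when the new command starts exactly where
    # the previous one ended (lba step == len, as in the log above).
    if runs and runs[-1][0] == op and runs[-1][4] + length == lba:
        prev = runs[-1]
        runs[-1] = (op, prev[1], prev[2], prev[3], lba, prev[5] + 1)
    else:
        runs.append((op, sqid, nsid, lba, lba, 1))

for op, sqid, nsid, first, last, count in runs:
    print(f"{op} sqid:{sqid} nsid:{nsid} lba:{first}..{last} ({count} cmds aborted)")

Fed the raw build log (e.g. grep nvme_io_qpair_print_command build.log | python3 collapse_aborts.py, with collapse_aborts.py being whatever name the sketch is saved under), this would emit one line per contiguous run, such as "WRITE sqid:1 nsid:1 lba:97120..97448 (42 cmds aborted)", matching the ranges summarized in this section.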
00:31:59.771 [2024-07-13 00:57:00.164924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.771 [2024-07-13 00:57:00.164959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.771 [2024-07-13 00:57:00.164968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.772 [2024-07-13 00:57:00.164976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.164983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.772 [2024-07-13 00:57:00.164990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.164997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.772 [2024-07-13 00:57:00.165003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bffd0 is same with the state(5) to be set 00:31:59.772 [2024-07-13 00:57:00.165042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.772 [2024-07-13 00:57:00.165567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.772 [2024-07-13 00:57:00.165575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:59.773 [2024-07-13 00:57:00.165589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.165990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.165998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.773 [2024-07-13 00:57:00.166139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.773 [2024-07-13 00:57:00.166154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.773 [2024-07-13 00:57:00.166169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39960 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.773 [2024-07-13 00:57:00.166184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.773 [2024-07-13 00:57:00.166192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.773 [2024-07-13 00:57:00.166198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.774 [2024-07-13 00:57:00.166213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.774 [2024-07-13 00:57:00.166231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.774 [2024-07-13 00:57:00.166247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.774 [2024-07-13 00:57:00.166261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.774 [2024-07-13 00:57:00.166278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.774 [2024-07-13 00:57:00.166293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.774 [2024-07-13 00:57:00.166308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.774 [2024-07-13 00:57:00.166322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.774 [2024-07-13 00:57:00.166330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.774 [2024-07-13 00:57:00.166337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:59.774 [2024-07-13 00:57:00.166345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.774 [2024-07-13 00:57:00.166352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same command/completion pair repeats for the remaining queued READs (lba:39896-39936) and queued WRITEs (lba:40000-40248) on qid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:31:59.775 [2024-07-13 00:57:00.166915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:59.775 [2024-07-13 00:57:00.166922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
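These floods come in pairs: nvme_io_qpair_print_command (line 243 of nvme_qpair.c) echoes the submitted command, and spdk_nvme_print_completion (line 474) prints the completion it was given; with len:8 per command, the aborted LBAs advance in steps of 8. A minimal standalone C sketch of pulling cid/lba out of such records to check that the aborted range is contiguous (parse_io_record is a hypothetical helper written for this log format, not an SPDK function):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: extract cid and lba from one
 * nvme_io_qpair_print_command record as printed above. */
static int parse_io_record(const char *line, unsigned *cid, unsigned long *lba)
{
    const char *p = strstr(line, "cid:");
    const char *q = strstr(line, "lba:");
    if (!p || !q)
        return -1;
    if (sscanf(p, "cid:%u", cid) != 1 || sscanf(q, "lba:%lu", lba) != 1)
        return -1;
    return 0;
}

int main(void)
{
    /* Two consecutive records from the flood above; len:8 per command,
     * so a clean queue drain shows lba advancing in steps of 8. */
    const char *recs[] = {
        "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39888 len:8",
        "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39896 len:8",
    };
    unsigned long prev_lba = 0;
    for (size_t i = 0; i < sizeof(recs) / sizeof(recs[0]); i++) {
        unsigned cid;
        unsigned long lba;
        if (parse_io_record(recs[i], &cid, &lba) != 0)
            continue;
        if (prev_lba != 0 && lba != prev_lba + 8)
            printf("gap before lba %lu\n", lba);
        printf("cid=%u lba=%lu\n", cid, lba);
        prev_lba = lba;
    }
    return 0;
}
```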
00:31:59.775 [2024-07-13 00:57:00.166945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:31:59.775 [2024-07-13 00:57:00.166951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:31:59.775 [2024-07-13 00:57:00.166957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40264 len:8 PRP1 0x0 PRP2 0x0 
00:31:59.775 [2024-07-13 00:57:00.166963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:59.775 [2024-07-13 00:57:00.167005] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x178ae00 was disconnected and freed. reset controller. 
00:31:59.775 [2024-07-13 00:57:00.167013] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:31:59.775 [2024-07-13 00:57:00.167020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:59.775 [2024-07-13 00:57:00.169860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:31:59.775 [2024-07-13 00:57:00.169887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bffd0 (9): Bad file descriptor 
00:31:59.775 [2024-07-13 00:57:00.198899] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
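This is the interesting part of the flood: on transport failure, bdev_nvme aborts every queued I/O, completes each one with ABORTED - SQ DELETION, frees the qpair, fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and resets the controller. The (00/08) printed in every completion is status code type 0x0 (generic) over status code 0x08. A standalone sketch of decoding that status word; the bit layout follows the NVMe spec and the constants mirror SPDK's SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION, redefined locally here so the example builds without SPDK headers:

```c
#include <stdio.h>
#include <stdint.h>

/* Values as in the NVMe spec: SCT 0x0 = generic command status,
 * SC 0x08 = command aborted due to SQ deletion. */
#define SCT_GENERIC            0x0
#define SC_ABORTED_SQ_DELETION 0x8

/* Layout of the NVMe completion status word (upper half of CQE DW3),
 * matching the p/m/dnr fields printed in the log records above. */
struct nvme_status {
    uint16_t p    : 1; /* phase tag */
    uint16_t sc   : 8; /* status code       -> the "08" above */
    uint16_t sct  : 3; /* status code type  -> the "00" above */
    uint16_t rsvd : 2;
    uint16_t m    : 1; /* more info available in error log page */
    uint16_t dnr  : 1; /* do not retry */
};

static const char *describe(struct nvme_status s)
{
    if (s.sct == SCT_GENERIC && s.sc == SC_ABORTED_SQ_DELETION)
        return "ABORTED - SQ DELETION";
    return "other status";
}

int main(void)
{
    /* The completion printed repeatedly above: (00/08), p:0 m:0 dnr:0.
     * dnr:0 means the command is retryable after the reset/failover. */
    struct nvme_status s = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };
    printf("(%02x/%02x) %s, retryable=%s\n",
           (unsigned)s.sct, (unsigned)s.sc, describe(s),
           s.dnr ? "no" : "yes");
    return 0;
}
```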
00:31:59.775 [2024-07-13 00:57:04.568865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:59.775 [2024-07-13 00:57:04.568903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same command/completion pair repeats, READs and WRITEs interleaved, for the remaining queued READs (lba:47928-48104) and queued WRITEs (lba:48168-48576) on qid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:31:59.777 [2024-07-13 00:57:04.570034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:59.777 [2024-07-13 00:57:04.570040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:59.777 [2024-07-13 00:57:04.570060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:31:59.777 [2024-07-13 00:57:04.570068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48592 len:8 PRP1 0x0 PRP2 0x0 
00:31:59.777 [2024-07-13 00:57:04.570077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the aborting queued i/o / Command completed manually / command / completion group repeats for the remaining queued WRITEs (lba:48600-48936) and queued READs (lba:48112-48152), all cid:0 with PRP1 0x0 PRP2 0x0, each completed with ABORTED - SQ DELETION (00/08) ...]
00:31:59.779 [2024-07-13 00:57:04.581961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:31:59.779 [2024-07-13 00:57:04.581968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:31:59.779 [2024-07-13 00:57:04.581975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48160 len:8 PRP1 0x0 PRP2 0x0 
00:31:59.779 [2024-07-13 00:57:04.581984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.779 [2024-07-13 00:57:04.581992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.779 [2024-07-13 00:57:04.581999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.779 [2024-07-13 00:57:04.582006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48152 len:8 PRP1 0x0 PRP2 0x0 00:31:59.779 [2024-07-13 00:57:04.582015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.779 [2024-07-13 00:57:04.582024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:59.779 [2024-07-13 00:57:04.582030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:59.779 [2024-07-13 00:57:04.582037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48160 len:8 PRP1 0x0 PRP2 0x0 00:31:59.779 [2024-07-13 00:57:04.582046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.779 [2024-07-13 00:57:04.582092] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x178abf0 was disconnected and freed. reset controller. 00:31:59.779 [2024-07-13 00:57:04.582102] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:59.779 [2024-07-13 00:57:04.582128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.779 [2024-07-13 00:57:04.582138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.779 [2024-07-13 00:57:04.582148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.779 [2024-07-13 00:57:04.582157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.779 [2024-07-13 00:57:04.582166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.779 [2024-07-13 00:57:04.582175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.779 [2024-07-13 00:57:04.582185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.779 [2024-07-13 00:57:04.582193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.779 [2024-07-13 00:57:04.582202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:59.779 [2024-07-13 00:57:04.582243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bffd0 (9): Bad file descriptor 00:31:59.779 [2024-07-13 00:57:04.586108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:59.779 [2024-07-13 00:57:04.619189] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
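Note: the flood of ABORTED - SQ DELETION entries above is the expected signature of a path switch, not a failure. When bdev_nvme tears down the active qpair to fail over (here from 10.0.0.2:4422 back to 10.0.0.2:4420), every command still queued on that submission queue is completed manually with status ABORTED - SQ DELETION (00/08), and the outstanding admin ASYNC EVENT REQUESTs are aborted the same way, before the controller reconnects on the next path. failover.sh then asserts that all three path switches ended in a successful reset; a minimal sketch of that check, assuming the bdevperf output was captured to try.txt as in this run:

  # One 'Resetting controller successful' line per failover is expected;
  # this run performs 3 failovers.
  count=$(grep -c 'Resetting controller successful' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || exit 1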
00:31:59.779
00:31:59.779 Latency(us)
00:31:59.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:59.779 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:59.779 Verification LBA range: start 0x0 length 0x4000
00:31:59.779 NVMe0n1 : 15.00 11060.95 43.21 373.02 0.00 11171.50 420.29 21769.35
00:31:59.779 ===================================================================================================================
00:31:59.779 Total : 11060.95 43.21 373.02 0.00 11171.50 420.29 21769.35
00:31:59.779 Received shutdown signal, test time was about 15.000000 seconds
00:31:59.779
00:31:59.779 Latency(us)
00:31:59.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:59.779 ===================================================================================================================
00:31:59.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1561466
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1561466 /var/tmp/bdevperf.sock
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1561466 ']'
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
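For the second phase the script starts a fresh bdevperf in standby mode (-z) and drives it entirely over its RPC socket. A condensed sketch of that bring-up, with rpc.py standing in for the full scripts/rpc.py path used in the trace, waitforlisten being the autotest_common.sh helper echoed above, and the backgrounding glue (&, $!) added here for illustration:

  # Start bdevperf idle: no job running yet, just an RPC server on the socket.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

  # Expose the subsystem on two more ports, then attach the same
  # controller name over all three paths so bdev_nvme can fail over.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

Attaching the same -b NVMe0 over additional transport IDs registers them as failover paths rather than new bdevs, which is why only the first attach in the trace below prints NVMe0n1.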
00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:59.779 00:57:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:59.779 00:57:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:59.779 00:57:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:59.779 00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:59.779 [2024-07-13 00:57:11.158325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:59.779 00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:00.038 [2024-07-13 00:57:11.338878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:00.038 00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:00.038 NVMe0n1 00:32:00.297 00:57:11 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:00.555 00:32:00.555 00:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.123 00:32:01.123 00:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:01.123 00:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:01.123 00:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:01.381 00:57:12 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:04.668 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:04.668 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:04.668 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1562324 00:32:04.668 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:04.668 00:57:15 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1562324 00:32:05.639 0 00:32:05.639 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:05.639 [2024-07-13 00:57:10.807024] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:32:05.639 [2024-07-13 00:57:10.807074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1561466 ] 00:32:05.639 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.639 [2024-07-13 00:57:10.873623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.639 [2024-07-13 00:57:10.910590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.639 [2024-07-13 00:57:12.764427] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:05.639 [2024-07-13 00:57:12.764473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.639 [2024-07-13 00:57:12.764485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.639 [2024-07-13 00:57:12.764493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.639 [2024-07-13 00:57:12.764499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.639 [2024-07-13 00:57:12.764506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.639 [2024-07-13 00:57:12.764513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.639 [2024-07-13 00:57:12.764519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.639 [2024-07-13 00:57:12.764526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.639 [2024-07-13 00:57:12.764532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:05.639 [2024-07-13 00:57:12.764558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:05.639 [2024-07-13 00:57:12.764572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39fd0 (9): Bad file descriptor 00:32:05.639 [2024-07-13 00:57:12.775088] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:05.639 Running I/O for 1 seconds... 
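Each failover in this phase is provoked from the host side: the script detaches the transport ID the controller is currently using, bdev_nvme fails over to the next registered path (4420 to 4421 in the trace above), and the preloaded 1-second verify job is kicked off over RPC. A sketch of one round, same rpc.py shorthand as before:

  # Drop the active path; queued I/O is aborted (SQ DELETION) and the
  # controller must come back healthy on the next transport ID.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  # Run the registered job, then confirm NVMe0 is still attached.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0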
00:32:05.639
00:32:05.639 Latency(us)
00:32:05.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:05.639 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:05.639 Verification LBA range: start 0x0 length 0x4000
00:32:05.639 NVMe0n1 : 1.01 10850.00 42.38 0.00 0.00 11746.83 2421.98 10371.78
00:32:05.639 ===================================================================================================================
00:32:05.639 Total : 10850.00 42.38 0.00 0.00 11746.83 2421.98 10371.78
00:32:05.639 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:05.639 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:32:05.898 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:06.156 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:06.156 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:32:06.156 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:06.415 00:57:17 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:32:09.704 00:57:20 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:09.704 00:57:20 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1561466
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1561466 ']'
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1561466
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1561466
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1561466'
killing process with pid 1561466
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1561466
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1561466
00:32:09.704 00:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:32:09.962
00:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:09.962 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:09.963 rmmod nvme_tcp 00:32:09.963 rmmod nvme_fabrics 00:32:09.963 rmmod nvme_keyring 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1557914 ']' 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1557914 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1557914 ']' 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1557914 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:09.963 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1557914 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1557914' 00:32:10.222 killing process with pid 1557914 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1557914 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1557914 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:10.222 00:57:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.757 00:57:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:12.757 00:32:12.757 real 0m38.053s 00:32:12.757 user 2m1.426s 00:32:12.757 sys 0m7.595s 00:32:12.757 00:57:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:12.757 00:57:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
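Teardown for the failover test, condensed from the nvmftestfini trace above; the real helper retries the module unloads up to 20 times, and a single pass is sketched here:

  sync
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f "$rootdir/test/nvmf/host/try.txt"   # $rootdir: the spdk checkout
  modprobe -v -r nvme-tcp                   # also unloads nvme_fabrics/nvme_keyring deps
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                           # target pid 1557914 in this run
  ip -4 addr flush cvl_0_1                  # initiator-side address cleanup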
00:32:12.757 ************************************ 00:32:12.757 END TEST nvmf_failover 00:32:12.757 ************************************ 00:32:12.757 00:57:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:12.757 00:57:23 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:12.757 00:57:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:12.757 00:57:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.757 00:57:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.757 ************************************ 00:32:12.757 START TEST nvmf_host_discovery 00:32:12.757 ************************************ 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:12.757 * Looking for test storage... 00:32:12.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.757 00:57:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:12.758 00:57:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:12.758 00:57:24 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:12.758 00:57:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.035 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:18.035 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:18.035 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:18.035 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:18.036 00:57:29 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:18.036 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:18.036 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:18.036 00:57:29 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:18.036 Found net devices under 0000:86:00.0: cvl_0_0 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:18.036 Found net devices under 0000:86:00.1: cvl_0_1 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:18.036 00:57:29 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:18.036 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:18.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:18.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:32:18.295 00:32:18.295 --- 10.0.0.2 ping statistics --- 00:32:18.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.295 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:18.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:18.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:32:18.295 00:32:18.295 --- 10.0.0.1 ping statistics --- 00:32:18.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:18.295 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1566607 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1566607 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1566607 ']' 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.295 00:57:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.295 [2024-07-13 00:57:29.825623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:18.295 [2024-07-13 00:57:29.825674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:18.295 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.554 [2024-07-13 00:57:29.896375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.554 [2024-07-13 00:57:29.935403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:18.554 [2024-07-13 00:57:29.935442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:18.554 [2024-07-13 00:57:29.935450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:18.554 [2024-07-13 00:57:29.935456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:18.554 [2024-07-13 00:57:29.935462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
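The host-discovery test that follows configures both SPDK instances over RPC: the target (running inside the cvl_0_0_ns_spdk namespace, RPC socket /var/tmp/spdk.sock) gets a TCP transport, a discovery listener on port 8009 and two null bdevs, and the host instance (RPC socket /tmp/host.sock) then starts the discovery service against that listener. Condensed from the xtrace below; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, abbreviated to rpc.py here:

  # Target side
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  rpc.py bdev_null_create null0 1000 512
  rpc.py bdev_null_create null1 1000 512
  rpc.py bdev_wait_for_examine

  # Host side: follow the discovery log page and attach whatever appears
  rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

Once nqn.2016-06.io.spdk:cnode0 is created, backed by null0, given a data listener on port 4420 and allowed for the test host NQN, the discovery poller picks it up: the tail of the trace shows the new subsystem being reported and nvme0 attached automatically.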
00:32:18.554 [2024-07-13 00:57:29.935497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.554 [2024-07-13 00:57:30.066438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.554 [2024-07-13 00:57:30.078577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.554 null0 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.554 null1 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1566631 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1566631 /tmp/host.sock 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1566631 ']' 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:18.554 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.554 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.813 [2024-07-13 00:57:30.153671] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:18.813 [2024-07-13 00:57:30.153713] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566631 ] 00:32:18.813 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.813 [2024-07-13 00:57:30.219694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.813 [2024-07-13 00:57:30.260826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:18.813 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:19.073 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 [2024-07-13 00:57:30.684170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:19.333 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.593 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:32:19.593 00:57:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:32:20.161 [2024-07-13 00:57:31.414399] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:20.161 [2024-07-13 00:57:31.414419] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:20.161 [2024-07-13 00:57:31.414432] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:20.161 [2024-07-13 00:57:31.500698] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:20.161 [2024-07-13 00:57:31.720087] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:20.161 [2024-07-13 00:57:31.720107] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:20.421 00:57:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
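The checks above are driven by a few small helpers from host/discovery.sh and autotest_common.sh whose bodies can be read off the xtrace itself. A minimal reconstruction (a sketch inferred from the traced commands, so the real definitions may differ in detail):

    get_subsystem_names() {
        # Controller names known to the host app on /tmp/host.sock (discovery.sh@59).
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # One bdev per attached namespace, e.g. "nvme0n1 nvme0n2" (discovery.sh@55).
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # TCP service IDs (ports) of every path to controller $1, numerically
        # sorted (discovery.sh@63).
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    waitforcondition() {
        # Poll an arbitrary shell condition up to ten times, one second apart
        # (autotest_common.sh@912-918); the non-zero return on timeout is an
        # assumption, since only the success path appears in this trace.
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
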
00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.681 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.682 [2024-07-13 00:57:32.196290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:20.682 [2024-07-13 00:57:32.196796] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:20.682 [2024-07-13 00:57:32.196817] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:20.682 00:57:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:20.942 [2024-07-13 00:57:32.283070] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.942 [2024-07-13 00:57:32.341600] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:20.942 [2024-07-13 00:57:32.341617] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:20.942 [2024-07-13 00:57:32.341622] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:20.942 00:57:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:21.881 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.141 [2024-07-13 00:57:33.459932] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:22.141 [2024-07-13 00:57:33.459954] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:22.141 [2024-07-13 00:57:33.462145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:22.141 [2024-07-13 00:57:33.462161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.141 [2024-07-13 00:57:33.462170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:22.141 [2024-07-13 00:57:33.462177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.141 [2024-07-13 00:57:33.462184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:22.141 [2024-07-13 00:57:33.462191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.141 [2024-07-13 00:57:33.462198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:22.141 [2024-07-13 00:57:33.462204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.141 [2024-07-13 00:57:33.462210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:22.141 00:57:33 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:22.141 [2024-07-13 00:57:33.472158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.141 [2024-07-13 00:57:33.482195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.141 [2024-07-13 00:57:33.482348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.141 [2024-07-13 00:57:33.482362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73a90 with addr=10.0.0.2, port=4420 00:32:22.141 [2024-07-13 00:57:33.482369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.141 [2024-07-13 00:57:33.482380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.141 [2024-07-13 00:57:33.482390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.141 [2024-07-13 00:57:33.482396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:22.141 [2024-07-13 00:57:33.482404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.141 [2024-07-13 00:57:33.482414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
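The burst of connect()/reset errors here is expected rather than a test failure: once nvmf_subsystem_remove_listener drops the 4420 listener, errno 111 (ECONNREFUSED) is what the host sees while it keeps retrying that path, and the retries stop once the discovery poller fetches the updated log page and discards the stale path (the "nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found" line further down). The test simply waits out the transition, roughly:

    # Expected end state after removing the 4420 listener: only the 4421 path
    # remains on nvme0 (cf. the discovery.sh@131 check below).
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
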
00:32:22.141 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.141 [2024-07-13 00:57:33.492248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.141 [2024-07-13 00:57:33.492487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.141 [2024-07-13 00:57:33.492499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73a90 with addr=10.0.0.2, port=4420 00:32:22.141 [2024-07-13 00:57:33.492506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.141 [2024-07-13 00:57:33.492516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.141 [2024-07-13 00:57:33.492526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.141 [2024-07-13 00:57:33.492532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:22.141 [2024-07-13 00:57:33.492539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.141 [2024-07-13 00:57:33.492548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.141 [2024-07-13 00:57:33.502299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.141 [2024-07-13 00:57:33.502419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.142 [2024-07-13 00:57:33.502432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73a90 with addr=10.0.0.2, port=4420 00:32:22.142 [2024-07-13 00:57:33.502438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.142 [2024-07-13 00:57:33.502448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.142 [2024-07-13 00:57:33.502457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.142 [2024-07-13 00:57:33.502467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:22.142 [2024-07-13 00:57:33.502474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.142 [2024-07-13 00:57:33.502483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:22.142 [2024-07-13 00:57:33.512351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.142 [2024-07-13 00:57:33.512527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.142 [2024-07-13 00:57:33.512538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73a90 with addr=10.0.0.2, port=4420 00:32:22.142 [2024-07-13 00:57:33.512545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.142 [2024-07-13 00:57:33.512555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.142 [2024-07-13 00:57:33.512565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.142 [2024-07-13 00:57:33.512570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:22.142 [2024-07-13 00:57:33.512577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.142 [2024-07-13 00:57:33.512586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:22.142 [2024-07-13 00:57:33.522403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.142 [2024-07-13 00:57:33.522532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.142 [2024-07-13 00:57:33.522543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73a90 with addr=10.0.0.2, port=4420 00:32:22.142 [2024-07-13 00:57:33.522549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.142 [2024-07-13 00:57:33.522559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.142 [2024-07-13 00:57:33.522568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.142 [2024-07-13 00:57:33.522574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:22.142 [2024-07-13 00:57:33.522580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:32:22.142 [2024-07-13 00:57:33.522589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:22.142 [2024-07-13 00:57:33.532453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.142 [2024-07-13 00:57:33.532672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.142 [2024-07-13 00:57:33.532684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73a90 with addr=10.0.0.2, port=4420 00:32:22.142 [2024-07-13 00:57:33.532691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.142 [2024-07-13 00:57:33.532702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.142 [2024-07-13 00:57:33.532717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.142 [2024-07-13 00:57:33.532724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:22.142 [2024-07-13 00:57:33.532731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.142 [2024-07-13 00:57:33.532740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:22.142 [2024-07-13 00:57:33.542505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:22.142 [2024-07-13 00:57:33.542619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.142 [2024-07-13 00:57:33.542630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73a90 with addr=10.0.0.2, port=4420 00:32:22.142 [2024-07-13 00:57:33.542637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73a90 is same with the state(5) to be set 00:32:22.142 [2024-07-13 00:57:33.542648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73a90 (9): Bad file descriptor 00:32:22.142 [2024-07-13 00:57:33.542657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.142 [2024-07-13 00:57:33.542663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:22.142 [2024-07-13 00:57:33.542669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.142 [2024-07-13 00:57:33.542678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
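The is_notification_count_eq checks woven through this trace verify how many bdev add/remove notifications the host app has emitted since the previous check. Reconstructed from the @74/@75 and @79/@80 lines (a sketch; notification_count and notify_id are assumed to be globals, with notify_id starting at 0):

    get_notification_count() {
        # Count notifications newer than the last seen notify_id, then advance it.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }
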
00:32:22.142 [2024-07-13 00:57:33.545759] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:22.142 [2024-07-13 00:57:33.545775] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:22.142 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:22.402 
00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.402 00:57:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.781 [2024-07-13 00:57:34.903305] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:23.781 [2024-07-13 00:57:34.903323] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:23.781 [2024-07-13 00:57:34.903337] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:23.781 [2024-07-13 00:57:34.989597] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:23.781 [2024-07-13 00:57:35.049717] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:23.781 [2024-07-13 00:57:35.049742] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:23.781 request: 00:32:23.781 { 00:32:23.781 "name": "nvme", 00:32:23.781 "trtype": "tcp", 00:32:23.781 "traddr": "10.0.0.2", 00:32:23.781 "adrfam": "ipv4", 00:32:23.781 "trsvcid": "8009", 00:32:23.781 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:23.781 "wait_for_attach": true, 00:32:23.781 "method": "bdev_nvme_start_discovery", 00:32:23.781 "req_id": 1 00:32:23.781 } 00:32:23.781 Got JSON-RPC error response 00:32:23.781 response: 00:32:23.781 { 00:32:23.781 "code": -17, 00:32:23.781 "message": "File exists" 00:32:23.781 } 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.781 request: 00:32:23.781 { 00:32:23.781 "name": "nvme_second", 00:32:23.781 "trtype": "tcp", 00:32:23.781 "traddr": "10.0.0.2", 00:32:23.781 "adrfam": "ipv4", 00:32:23.781 "trsvcid": "8009", 00:32:23.781 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:23.781 "wait_for_attach": true, 00:32:23.781 "method": "bdev_nvme_start_discovery", 00:32:23.781 "req_id": 1 00:32:23.781 } 00:32:23.781 Got JSON-RPC error response 00:32:23.781 response: 00:32:23.781 { 00:32:23.781 "code": -17, 00:32:23.781 "message": "File exists" 00:32:23.781 } 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.781 00:57:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.781 00:57:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:25.160 [2024-07-13 00:57:36.290125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.160 [2024-07-13 00:57:36.290153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db1970 with addr=10.0.0.2, port=8010 00:32:25.160 [2024-07-13 00:57:36.290166] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:25.160 [2024-07-13 00:57:36.290172] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:25.160 [2024-07-13 00:57:36.290179] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:26.098 [2024-07-13 00:57:37.292571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:26.098 [2024-07-13 00:57:37.292594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d72030 with addr=10.0.0.2, port=8010 00:32:26.098 [2024-07-13 00:57:37.292604] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:26.098 [2024-07-13 00:57:37.292610] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:26.098 [2024-07-13 00:57:37.292616] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:27.036 [2024-07-13 00:57:38.294744] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:27.036 request: 00:32:27.036 { 00:32:27.036 "name": "nvme_second", 00:32:27.036 "trtype": "tcp", 00:32:27.036 "traddr": "10.0.0.2", 00:32:27.036 "adrfam": "ipv4", 00:32:27.036 "trsvcid": "8010", 00:32:27.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:27.036 "wait_for_attach": false, 00:32:27.036 "attach_timeout_ms": 3000, 00:32:27.036 "method": "bdev_nvme_start_discovery", 00:32:27.036 "req_id": 1 00:32:27.036 } 00:32:27.036 Got JSON-RPC error response 00:32:27.036 response: 00:32:27.036 { 00:32:27.036 "code": -110, 
00:32:27.036 "message": "Connection timed out" 00:32:27.036 } 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1566631 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:27.036 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:27.036 rmmod nvme_tcp 00:32:27.036 rmmod nvme_fabrics 00:32:27.037 rmmod nvme_keyring 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1566607 ']' 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1566607 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1566607 ']' 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1566607 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1566607 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1566607' 00:32:27.037 killing process with pid 1566607 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1566607 00:32:27.037 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1566607 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.296 00:57:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.263 00:57:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:29.263 00:32:29.263 real 0m16.805s 00:32:29.263 user 0m20.150s 00:32:29.263 sys 0m5.680s 00:32:29.263 00:57:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:29.263 00:57:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:29.263 ************************************ 00:32:29.263 END TEST nvmf_host_discovery 00:32:29.263 ************************************ 00:32:29.263 00:57:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:29.263 00:57:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:29.263 00:57:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:29.263 00:57:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.263 00:57:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:29.263 ************************************ 00:32:29.263 START TEST nvmf_host_multipath_status 00:32:29.263 ************************************ 00:32:29.263 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:29.522 * Looking for test storage... 
00:32:29.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:29.522 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.522 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:29.522 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.522 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.522 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.522 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.522 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:29.523 00:57:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:29.523 00:57:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.794 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:34.795 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:34.795 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
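Both ports of the E810 adapter (0000:86:00.0 and 0000:86:00.1, device ID 0x159b, ice driver) have now been kept as candidate nvmf devices. The next stretch of the trace maps each PCI function to its kernel net device by globbing sysfs; a minimal sketch of that lookup, using the variable names nvmf/common.sh uses in this trace:

    for pci in "${pci_devs[@]}"; do
        # each function exposes its net device name under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

On this host the glob resolves to cvl_0_0 and cvl_0_1; with two interfaces available, cvl_0_0 becomes the target interface and cvl_0_1 the initiator.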
00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:34.795 Found net devices under 0000:86:00.0: cvl_0_0 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:34.795 Found net devices under 0000:86:00.1: cvl_0_1 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:34.795 00:57:46 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:34.795 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:35.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:32:35.055 00:32:35.055 --- 10.0.0.2 ping statistics --- 00:32:35.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.055 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:35.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:32:35.055 00:32:35.055 --- 10.0.0.1 ping statistics --- 00:32:35.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.055 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:35.055 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1571681 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1571681 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1571681 ']' 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:35.314 [2024-07-13 00:57:46.674419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
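With connectivity verified in both directions (10.0.0.2 from the host, 10.0.0.1 from inside the namespace), nvme-tcp is loaded and nvmf_tgt is started inside cvl_0_0_ns_spdk as pid 1571681. The target bring-up that multipath_status.sh performs next, traced in full below, condenses to this RPC sequence (rpc.py paths shortened here; sizes, NQN, and addresses as used in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same IP give the host two ANA-managed paths
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf then attaches both listeners to the same controller (the second bdev_nvme_attach_controller with -x multipath), and each check_status round flips the per-listener ANA state and inspects the resulting paths with filters of the form jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' over bdev_nvme_get_io_paths output.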
00:32:35.314 [2024-07-13 00:57:46.674460] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.314 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.314 [2024-07-13 00:57:46.745143] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:35.314 [2024-07-13 00:57:46.785856] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.314 [2024-07-13 00:57:46.785895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.314 [2024-07-13 00:57:46.785902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.314 [2024-07-13 00:57:46.785909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.314 [2024-07-13 00:57:46.785914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.314 [2024-07-13 00:57:46.785989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.314 [2024-07-13 00:57:46.785989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:35.314 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:35.574 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.574 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1571681 00:32:35.574 00:57:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:35.574 [2024-07-13 00:57:47.058638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.574 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:35.834 Malloc0 00:32:35.834 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:36.093 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:36.093 00:57:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:36.352 [2024-07-13 00:57:47.806609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.352 00:57:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:36.612 [2024-07-13 00:57:47.999086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1571910 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1571910 /var/tmp/bdevperf.sock 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1571910 ']' 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:36.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:36.612 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:36.872 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:36.872 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:32:36.872 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:37.131 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:37.390 Nvme0n1 00:32:37.390 00:57:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:37.958 Nvme0n1 00:32:37.958 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:37.958 00:57:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:39.865 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:39.865 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:40.124 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:40.124 00:57:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.501 00:57:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:41.501 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:41.501 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:41.501 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.501 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:41.759 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.759 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:41.759 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.759 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:42.017 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.017 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:42.017 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.017 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:42.276 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.276 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:42.276 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.276 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:42.276 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.276 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:42.276 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:42.535 00:57:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:42.794 00:57:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:43.729 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:43.729 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:43.729 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.729 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:43.988 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:43.988 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:43.988 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:43.988 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.246 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:44.505 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.505 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:44.505 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.505 00:57:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:44.764 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.764 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:44.764 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.764 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:44.764 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.764 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:44.764 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:45.022 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:45.281 00:57:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:46.215 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:46.215 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:46.215 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.215 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:46.474 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.474 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:46.474 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.474 00:57:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.733 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:46.991 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.991 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:46.991 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.991 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:47.250 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.250 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:47.250 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:47.250 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:47.509 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.509 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:47.509 00:57:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:47.509 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:47.768 00:57:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:48.703 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:48.703 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:48.703 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.703 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:48.961 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.961 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:48.961 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.961 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.253 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:49.512 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.512 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:49.512 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.512 00:58:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:49.771 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
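Each set_ANA_state step in this trace is just two target-side RPCs, one per listener, as shown at multipath_status.sh@59 and @60: the first argument becomes the ANA state of the 4420 listener, the second that of the 4421 listener. A sketch under that reading (rpc_target stands in for the target-side rpc.py invocation seen in the trace, which uses the default RPC socket):

    NQN=nqn.2016-06.io.spdk:cnode1
    rpc_target="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    set_ANA_state() {
        # $1 -> port 4420, $2 -> port 4421 (optimized/non_optimized/inaccessible)
        $rpc_target nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_target nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }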
-- # [[ true == \t\r\u\e ]] 00:32:49.771 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:49.771 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.771 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:50.029 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:50.029 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:50.029 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:50.029 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:50.287 00:58:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:51.223 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:51.223 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:51.223 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.223 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:51.482 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.482 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:51.482 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:51.482 00:58:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
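check_status bundles six port_status assertions; from the sh@68-73 lines above, the argument order is current, then connected, then accessible, for port 4420 before 4421 within each pair. Reconstructed shape:

    # check_status <cur4420> <cur4421> <con4420> <con4421> <acc4420> <acc4421>
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

So the check_status false false true true false false following the inaccessible/inaccessible transition reads: neither path is current or accessible, but both TCP connections stay up.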
port_status 4421 connected true 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.740 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:51.998 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.998 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:51.998 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.998 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:52.257 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:52.257 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:52.257 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:52.257 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:52.516 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:52.516 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:52.516 00:58:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:52.516 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:52.775 00:58:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:53.710 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:53.710 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:53.710 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.710 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:53.969 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:53.969 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:53.969 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
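For debugging a failed expectation it can be handier to dump all three flags for every path in one RPC round-trip instead of six. A jq variant of the same query, using the rpc_bdevperf shorthand from the sketch above (an alternative for triage, not what the script runs):

    $rpc_bdevperf bdev_nvme_get_io_paths | jq -r '
        .poll_groups[].io_paths[] |
        "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'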
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.969 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:54.228 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.228 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:54.228 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:54.228 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.486 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.486 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:54.486 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.486 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:54.486 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.487 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:54.487 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.487 00:58:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:54.745 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:54.745 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:54.745 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.745 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:55.003 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.003 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:55.003 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:55.003 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:32:55.262 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:55.521 00:58:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:56.457 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:56.457 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:56.457 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.457 00:58:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:56.717 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.717 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:56.717 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.717 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.976 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:57.235 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.235 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:57.235 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.235 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:57.494 00:58:08 
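The bdev_nvme_set_multipath_policy call at multipath_status.sh@116 is the pivot of the second half of the test: under the default active_passive policy at most one path reports current=true, while active_active marks every usable path current, which is why the optimized/optimized check that follows expects true for both ports.

    # Traced at multipath_status.sh@116: switch Nvme0n1 from active_passive
    # (single current path) to active_active (all usable paths current).
    $rpc_bdevperf bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active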
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.494 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:57.494 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.494 00:58:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:57.752 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.752 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:57.752 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:58.011 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:58.011 00:58:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:58.949 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:58.949 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:59.208 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.208 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:59.208 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:59.208 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:59.208 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.208 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:59.468 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.468 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:59.468 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.468 00:58:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:59.727 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.727 00:58:11 
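The check_status false true true true true true under non_optimized/optimized shows that active_active only promotes optimized paths while one exists: 4420 stays connected and accessible but drops current=false. The test absorbs the ANA update latency with a fixed sleep 1; a hypothetical hardening (not in the script) would poll for the expected value instead:

    # Hypothetical: retry a single expectation rather than relying on sleep 1.
    wait_port_status() {
        local i
        for ((i = 0; i < 10; i++)); do
            port_status "$1" "$2" "$3" && return 0
            sleep 0.5
        done
        return 1
    }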
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:59.727 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.727 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:59.727 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.727 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:59.985 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.985 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:59.985 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.985 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:59.985 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:59.985 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.244 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.244 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:00.244 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:00.503 00:58:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:00.762 00:58:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:01.696 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:01.696 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:01.696 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.696 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:01.954 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.954 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:01.954 00:58:13 
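The transition at sh@129 (back to non_optimized/non_optimized) then flips 4420 to current=true again: with no optimized path left, active_active treats all non_optimized paths as usable. If the host-side view ever disagrees, the listener states can be read back from the target; a hedged sketch, assuming the nvmf_subsystem_get_listeners RPC and its usual output shape:

    # Assumption: nvmf_subsystem_get_listeners returns a list of listeners
    # with .address.trsvcid and per-ANA-group .ana_states entries.
    $rpc_target nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r '.[] | "\(.address.trsvcid): \(.ana_states)"'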
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.954 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:01.954 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.954 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:01.954 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.954 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:02.212 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.212 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:02.212 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.212 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.470 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.470 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.470 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.470 00:58:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.728 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.728 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.728 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:02.728 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.728 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.728 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:02.728 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:02.987 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:03.244 00:58:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:04.181 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:04.181 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.181 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.181 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.440 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.440 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:04.440 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.440 00:58:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.699 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.699 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.699 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.699 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:04.699 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.699 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:04.958 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.958 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:04.958 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.958 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:04.958 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.958 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.217 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.217 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- 
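Taken together, the eight transitions exercised in this run cover the following expectation matrix (columns follow the check_status argument order; every row matches a check traced above):

    ANA 4420/4421                  policy          cur20 cur21 con20 con21 acc20 acc21
    non_optimized / non_optimized  active_passive  true  false true  true  true  true
    non_optimized / inaccessible   active_passive  true  false true  true  true  false
    inaccessible  / inaccessible   active_passive  false false true  true  false false
    inaccessible  / optimized      active_passive  false true  true  true  false true
    optimized     / optimized      active_active   true  true  true  true  true  true
    non_optimized / optimized      active_active   false true  true  true  true  true
    non_optimized / non_optimized  active_active   true  true  true  true  true  true
    non_optimized / inaccessible   active_active   true  false true  true  true  false

connected stays true throughout: ANA state changes never tear down the TCP connections, they only gate which paths carry I/O.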
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:05.217 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.217 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1571910 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1571910 ']' 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1571910 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571910 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571910' 00:33:05.476 killing process with pid 1571910 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1571910 00:33:05.476 00:58:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1571910 00:33:05.476 Connection closed with partial response: 00:33:05.476 00:33:05.476 00:33:05.740 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1571910 00:33:05.741 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:05.741 [2024-07-13 00:57:48.073319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:05.741 [2024-07-13 00:57:48.073371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571910 ] 00:33:05.741 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.741 [2024-07-13 00:57:48.141111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.741 [2024-07-13 00:57:48.181415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:05.741 Running I/O for 90 seconds... 
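Everything below is the body of try.txt, the bdevperf log the test dumps after tearing the process down (the "Connection closed with partial response" above is bdevperf being killed mid-run, which the script tolerates). The long run of nvme_qpair NOTICE pairs is expected: while a listener sits in the inaccessible ANA state, in-flight WRITE/READ commands on that path complete with the path-related status ASYMMETRIC ACCESS INACCESSIBLE (SCT 03h / SC 02h, printed as 03/02), and the host retries them on the other path. A quick way to size that effect from the dump:

    # Count I/O completions that hit the ANA-inaccessible status during the run.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt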
00:33:05.741 [2024-07-13 00:58:01.532727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.532986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.532999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.741 [2024-07-13 00:58:01.533281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:05.741 [2024-07-13 00:58:01.533426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:33136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.741 [2024-07-13 00:58:01.533588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:05.741 [2024-07-13 00:58:01.533601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:33184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:33248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:05.742 [2024-07-13 00:58:01.533808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:05.742 [2024-07-13 00:58:01.533822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:33256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:05.742 [2024-07-13 00:58:01.533828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:05.742 [... roughly a hundred further near-identical *NOTICE* command/completion pairs trimmed for readability: in the 00:58:01 burst every outstanding READ on qid:1 (lba 33256 through 33736) and the WRITEs at lba 33928 and 33936 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); in the 00:58:14 burst the same status was returned for the WRITEs at lba 99384 through 100136 and the READs in the lba 99200 through 99352 range ...]
00:33:05.745 Received shutdown signal, test time was about 27.461166 seconds
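A note on the status code, since it dominates the output above: "(03/02)" is the NVMe status pair SCT 0x3 (Path Related Status) / SC 0x02 (Asymmetric Access Inaccessible), which a controller returns while a namespace is being reached over a path whose ANA group is in the Inaccessible state. Bursts of these completions are therefore the expected signal in this multipath status test, which flips a listener's ANA state while I/O is in flight. As a hedged sketch (not commands captured in this run), such a flip can be driven with the SPDK RPC script; the subsystem NQN below is taken from the cleanup step later in this log, while the address, port, and exact flag spellings are assumptions:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Mark one listener inaccessible; I/O queued on that path should start
  # completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), as seen above.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  # Restore the path so the host multipath layer can fail back to it.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n optimized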
00:33:05.745
00:33:05.745 Latency(us)
00:33:05.745 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:33:05.745 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:05.745 Verification LBA range: start 0x0 length 0x4000
00:33:05.745 Nvme0n1                     :      27.46   10299.23      40.23       0.00      0.00   12407.43     174.53 3019898.88
00:33:05.745 ===================================================================================================================
00:33:05.745 Total                       :              10299.23      40.23       0.00      0.00   12407.43     174.53 3019898.88
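Reading the bdevperf summary: the verify job against Nvme0n1 sustained 10299.23 IOPS at the 4096-byte I/O size over the 27.46 s run, which is self-consistent (10299.23 x 4096 B is about 42.2 MB/s, i.e. the reported 40.23 MiB/s), with zero failed and zero timed-out I/Os. The latency columns are in microseconds, so the minimum is 174.53 us and the average 12407.43 us, while the 3019898.88 us (about 3.0 s) maximum plausibly corresponds to an I/O held back while its path sat in the Inaccessible ANA state before being retried on the surviving path.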
00:33:05.745 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1571681 ']'
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1571681
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1571681 ']'
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1571681
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1571681
00:33:06.007 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1571681'
killing process with pid 1571681
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1571681
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1571681
00:33:06.007 00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:58:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:08.610 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:08.611
00:33:08.611 real 0m38.816s
00:33:08.611 user 1m44.830s
00:33:08.611 sys 0m10.727s
00:58:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:08.611 00:58:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:08.611 ************************************
00:33:08.611 END TEST nvmf_host_multipath_status
00:33:08.611 ************************************
00:33:08.611 00:58:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:33:08.611 00:58:19 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:58:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:58:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:08.611 00:58:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:08.611 ************************************
00:33:08.611 START TEST nvmf_discovery_remove_ifc
00:33:08.611 ************************************
00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:08.611 * Looking for test storage...
00:33:08.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:08.611 00:58:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:13.886 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:13.886 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:13.886 00:58:25 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:13.886 Found net devices under 0000:86:00.0: cvl_0_0 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:13.886 Found net devices under 0000:86:00.1: cvl_0_1 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:13.886 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:13.887 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:14.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:33:14.145 00:33:14.145 --- 10.0.0.2 ping statistics --- 00:33:14.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.145 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:33:14.145 00:33:14.145 --- 10.0.0.1 ping statistics --- 00:33:14.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.145 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1580189 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1580189 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1580189 ']' 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:14.145 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.145 [2024-07-13 00:58:25.593948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
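Everything from common.sh@229 through @268 above is nvmf_tcp_init carving one of the two E810 ports into a private network namespace so target and initiator can exercise real hardware on a single box; the nvmfappstart at common.sh@480 then launches nvmf_tgt through ip netns exec so the target's TCP listener binds inside that namespace. A minimal standalone sketch of the same plumbing, with interface names and addresses exactly as observed in this run:

    # Namespace plumbing, condensed from the nvmf_tcp_init trace above.
    # cvl_0_0/cvl_0_1 are the ice netdevs this rig enumerated; other hosts differ.
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # and back

The two one-packet pings are the gate for the rest of the test: only after both directions answer does common.sh return 0 and let nvmfappstart proceed.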
00:33:14.145 [2024-07-13 00:58:25.593996] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.145 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.145 [2024-07-13 00:58:25.666555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.403 [2024-07-13 00:58:25.706497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.403 [2024-07-13 00:58:25.706534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.403 [2024-07-13 00:58:25.706542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.403 [2024-07-13 00:58:25.706548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.403 [2024-07-13 00:58:25.706553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.403 [2024-07-13 00:58:25.706591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 [2024-07-13 00:58:25.842558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.403 [2024-07-13 00:58:25.850687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:14.403 null0 00:33:14.403 [2024-07-13 00:58:25.882685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1580265 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1580265 /tmp/host.sock 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1580265 ']' 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:14.403 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:14.403 00:58:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 [2024-07-13 00:58:25.950848] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:14.403 [2024-07-13 00:58:25.950886] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580265 ] 00:33:14.662 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.662 [2024-07-13 00:58:26.019800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.662 [2024-07-13 00:58:26.061187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.662 00:58:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.039 [2024-07-13 00:58:27.188623] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:16.039 [2024-07-13 00:58:27.188643] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:16.039 [2024-07-13 00:58:27.188654] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:16.039 [2024-07-13 00:58:27.274923] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:16.039 [2024-07-13 00:58:27.493069] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:16.039 [2024-07-13 00:58:27.493111] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:16.039 [2024-07-13 00:58:27.493132] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:16.039 [2024-07-13 00:58:27.493145] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:16.039 [2024-07-13 00:58:27.493165] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.039 [2024-07-13 00:58:27.498007] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6bb780 was disconnected and freed. delete nvme_qpair. 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:16.039 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.298 00:58:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:16.298 00:58:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.235 00:58:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:18.612 00:58:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:19.548 00:58:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:20.485 00:58:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:20.485 00:58:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:21.419 [2024-07-13 00:58:32.934595] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:21.419 [2024-07-13 00:58:32.934632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.419 [2024-07-13 00:58:32.934642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.419 [2024-07-13 00:58:32.934651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.419 [2024-07-13 00:58:32.934658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.419 [2024-07-13 00:58:32.934665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.419 [2024-07-13 00:58:32.934672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.419 [2024-07-13 00:58:32.934680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.419 [2024-07-13 00:58:32.934687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.419 [2024-07-13 00:58:32.934694] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:21.419 [2024-07-13 00:58:32.934701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:21.419 [2024-07-13 00:58:32.934707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682110 is same with the state(5) to be set 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.419 [2024-07-13 00:58:32.944617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x682110 (9): Bad file descriptor 00:33:21.419 [2024-07-13 00:58:32.954655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:21.419 00:58:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:22.796 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:22.796 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:22.796 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:22.796 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.796 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:22.796 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:22.796 00:58:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:22.796 [2024-07-13 00:58:34.018263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:22.796 [2024-07-13 00:58:34.018339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x682110 with addr=10.0.0.2, port=4420 00:33:22.796 [2024-07-13 00:58:34.018368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x682110 is same with the state(5) to be set 00:33:22.796 [2024-07-13 00:58:34.018415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x682110 (9): Bad file descriptor 00:33:22.796 [2024-07-13 00:58:34.019337] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:22.796 [2024-07-13 00:58:34.019385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:22.796 [2024-07-13 00:58:34.019406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:22.796 [2024-07-13 00:58:34.019427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:22.796 [2024-07-13 00:58:34.019462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
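The repeating get_bdev_list blocks above are the wait_for_bdev spin: once a second the script asks the host app on /tmp/host.sock for its bdev list and compares it against the expected value, here waiting for nvme0n1 to disappear after the interface is pulled. A hedged reconstruction of that helper pair; the rpc_cmd | jq | sort | xargs pipeline is verbatim from the trace, while the retry cap is an assumed placeholder:

    # Reconstruction of the polling seen above; rpc_cmd wraps scripts/rpc.py in
    # the harness, and the 20-iteration bound here is illustrative, not sourced.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local bdev=$1 i
        for ((i = 0; i < 20; i++)); do
            [[ $(get_bdev_list) == "$bdev" ]] && return 0
            sleep 1
        done
        return 1
    }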
00:33:22.796 [2024-07-13 00:58:34.019483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:22.796 00:58:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.796 00:58:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:22.796 00:58:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:23.734 [2024-07-13 00:58:35.021981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:23.734 [2024-07-13 00:58:35.022001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:23.734 [2024-07-13 00:58:35.022009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:23.734 [2024-07-13 00:58:35.022015] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:23.734 [2024-07-13 00:58:35.022042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.734 [2024-07-13 00:58:35.022059] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:23.734 [2024-07-13 00:58:35.022077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.734 [2024-07-13 00:58:35.022086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.734 [2024-07-13 00:58:35.022095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.734 [2024-07-13 00:58:35.022102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.734 [2024-07-13 00:58:35.022110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.734 [2024-07-13 00:58:35.022117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.734 [2024-07-13 00:58:35.022124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.734 [2024-07-13 00:58:35.022130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.734 [2024-07-13 00:58:35.022137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:23.734 [2024-07-13 00:58:35.022143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.734 [2024-07-13 00:58:35.022154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
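All of the errno 110 and "Resetting controller failed" noise above is fallout from a two-command fault injection at discovery_remove_ifc.sh@75/@76: the test deletes the target's address and downs its link inside the namespace, then lets the reconnect budget passed to bdev_nvme_start_discovery (--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1) run out. The injection step, exactly as run:

    # Pull the target address out from under the established TCP connections.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # bdev_nvme now retries roughly once a second; after the 2 s controller-loss
    # window it fails the controller and deletes nvme0n1, emptying the bdev list.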
00:33:23.734 [2024-07-13 00:58:35.022716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6815d0 (9): Bad file descriptor 00:33:23.734 [2024-07-13 00:58:35.023725] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:23.734 [2024-07-13 00:58:35.023735] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:23.734 00:58:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:24.672 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:24.931 00:58:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:25.869 [2024-07-13 00:58:37.074749] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:25.869 [2024-07-13 00:58:37.074766] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:25.869 [2024-07-13 00:58:37.074780] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:25.869 [2024-07-13 00:58:37.161038] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:25.869 00:58:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:25.869 [2024-07-13 00:58:37.337528] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:25.869 [2024-07-13 00:58:37.337564] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:25.869 [2024-07-13 00:58:37.337582] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:25.869 [2024-07-13 00:58:37.337595] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:25.869 [2024-07-13 00:58:37.337602] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:25.869 [2024-07-13 00:58:37.343412] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x68f7b0 was disconnected and freed. delete nvme_qpair. 
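Recovery is the mirror image of the injection: discovery_remove_ifc.sh@82/@83 above re-add the address and raise the link, the discovery connection to port 8009 comes back first (discovery ctrlr attached), the log page is fetched again, and because the old controller was already torn down the subsystem re-attaches under a fresh name, nvme1. Hence the final wait is for nvme1n1 rather than nvme0n1:

    # Restore the target side and wait for the re-attached namespace bdev.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # same polling helper sketched earlier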
00:33:26.806 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:26.806 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:26.806 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:26.806 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:26.806 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:26.806 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:26.806 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1580265 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1580265 ']' 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1580265 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1580265 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1580265' 00:33:27.065 killing process with pid 1580265 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1580265 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1580265 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:27.065 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:27.066 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:27.066 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:27.066 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:27.066 rmmod nvme_tcp 00:33:27.325 rmmod nvme_fabrics 00:33:27.325 rmmod nvme_keyring 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
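Cleanup proceeds in reverse: killprocess takes down the host app, then nvmftestfini kills the target, and nvmfcleanup unloads the initiator modules inside a set +e retry loop because nvme_tcp can still hold references for a moment; the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are the visible result. A rough equivalent of the sequence, with the namespace removal (hidden behind _remove_spdk_ns, whose xtrace is disabled) assumed to be an ip netns delete:

    # Approximate teardown mirroring the trace; _remove_spdk_ns internals are
    # not shown in the log, so the netns delete line is an assumption.
    kill "$hostpid" && wait "$hostpid"    # host app on /tmp/host.sock
    kill "$nvmfpid" && wait "$nvmfpid"    # nvmf_tgt inside the namespace
    sync
    modprobe -v -r nvme-tcp               # pulls nvme_fabrics/nvme_keyring too
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1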
00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1580189 ']' 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1580189 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1580189 ']' 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1580189 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1580189 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1580189' 00:33:27.325 killing process with pid 1580189 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1580189 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1580189 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:27.325 00:58:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.875 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:29.875 00:33:29.875 real 0m21.290s 00:33:29.875 user 0m26.769s 00:33:29.875 sys 0m5.677s 00:33:29.875 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:29.875 00:58:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.875 ************************************ 00:33:29.875 END TEST nvmf_discovery_remove_ifc 00:33:29.875 ************************************ 00:33:29.875 00:58:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:29.875 00:58:40 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:29.875 00:58:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:29.875 00:58:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:29.875 00:58:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:29.875 ************************************ 00:33:29.875 START TEST nvmf_identify_kernel_target 00:33:29.875 ************************************ 
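The START TEST banner, the real/user/sys timing block, and the matching END TEST banner all come from run_test in autotest_common.sh, through which nvmf.sh funnels each test script. A hedged reduction of what that wrapper does; only its output format is sourced from this log, the internals are an assumption:

    # Assumed shape of the run_test wrapper whose banners appear above.
    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' \
                      "START TEST $name" \
                      '************************************'
        time "$@"                 # produces the real/user/sys lines in the log
        local rc=$?
        printf '%s\n' '************************************' \
                      "END TEST $name" \
                      '************************************'
        return $rc
    }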
00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:29.875 * Looking for test storage... 00:33:29.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.875 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:29.876 00:58:41 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:29.876 00:58:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:35.160 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:35.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:35.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:35.161 Found net devices under 0000:86:00.0: cvl_0_0 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:35.161 Found net devices under 0000:86:00.1: cvl_0_1 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.161 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:35.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:35.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:33:35.420 00:33:35.420 --- 10.0.0.2 ping statistics --- 00:33:35.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.420 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:35.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:33:35.420 00:33:35.420 --- 10.0.0.1 ping statistics --- 00:33:35.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.420 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:35.420 00:58:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:35.420 00:58:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:38.707 Waiting for block devices as requested 00:33:38.707 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:38.707 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:38.707 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:38.707 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:38.707 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:38.707 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:38.707 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:38.707 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:38.707 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:38.965 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:38.965 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:38.965 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:39.224 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:39.224 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:39.224 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:39.483 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:39.483 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:39.483 00:58:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:39.483 No valid GPT data, bailing 00:33:39.483 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:39.483 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:39.483 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:39.483 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:39.483 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:39.483 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:39.483 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:39.742 00:33:39.742 Discovery Log Number of Records 2, Generation counter 2 00:33:39.742 =====Discovery Log Entry 0====== 00:33:39.742 trtype: tcp 00:33:39.742 adrfam: ipv4 00:33:39.742 subtype: current discovery subsystem 00:33:39.742 treq: not specified, sq flow control disable supported 00:33:39.742 portid: 1 00:33:39.742 trsvcid: 4420 00:33:39.742 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:39.742 traddr: 10.0.0.1 00:33:39.742 eflags: none 00:33:39.742 sectype: none 00:33:39.742 =====Discovery Log Entry 1====== 00:33:39.742 trtype: tcp 00:33:39.742 adrfam: ipv4 00:33:39.742 subtype: nvme subsystem 00:33:39.742 treq: not specified, sq flow control disable supported 00:33:39.742 portid: 1 00:33:39.742 trsvcid: 4420 00:33:39.742 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:39.742 traddr: 10.0.0.1 00:33:39.742 eflags: none 00:33:39.742 sectype: none 00:33:39.742 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:39.742 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:39.742 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.742 ===================================================== 00:33:39.742 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:39.742 ===================================================== 00:33:39.742 Controller Capabilities/Features 00:33:39.742 ================================ 00:33:39.742 Vendor ID: 0000 00:33:39.742 Subsystem Vendor ID: 0000 00:33:39.742 Serial Number: 8b925c44319006a1e5dd 00:33:39.742 Model Number: Linux 00:33:39.742 Firmware Version: 6.7.0-68 00:33:39.742 Recommended Arb Burst: 0 00:33:39.742 IEEE OUI Identifier: 00 00 00 00:33:39.742 Multi-path I/O 00:33:39.742 May have multiple subsystem ports: No 00:33:39.742 May have multiple 
controllers: No 00:33:39.742 Associated with SR-IOV VF: No 00:33:39.742 Max Data Transfer Size: Unlimited 00:33:39.742 Max Number of Namespaces: 0 00:33:39.742 Max Number of I/O Queues: 1024 00:33:39.742 NVMe Specification Version (VS): 1.3 00:33:39.742 NVMe Specification Version (Identify): 1.3 00:33:39.742 Maximum Queue Entries: 1024 00:33:39.742 Contiguous Queues Required: No 00:33:39.742 Arbitration Mechanisms Supported 00:33:39.742 Weighted Round Robin: Not Supported 00:33:39.742 Vendor Specific: Not Supported 00:33:39.742 Reset Timeout: 7500 ms 00:33:39.742 Doorbell Stride: 4 bytes 00:33:39.742 NVM Subsystem Reset: Not Supported 00:33:39.742 Command Sets Supported 00:33:39.742 NVM Command Set: Supported 00:33:39.742 Boot Partition: Not Supported 00:33:39.742 Memory Page Size Minimum: 4096 bytes 00:33:39.742 Memory Page Size Maximum: 4096 bytes 00:33:39.742 Persistent Memory Region: Not Supported 00:33:39.742 Optional Asynchronous Events Supported 00:33:39.742 Namespace Attribute Notices: Not Supported 00:33:39.742 Firmware Activation Notices: Not Supported 00:33:39.742 ANA Change Notices: Not Supported 00:33:39.742 PLE Aggregate Log Change Notices: Not Supported 00:33:39.742 LBA Status Info Alert Notices: Not Supported 00:33:39.742 EGE Aggregate Log Change Notices: Not Supported 00:33:39.742 Normal NVM Subsystem Shutdown event: Not Supported 00:33:39.742 Zone Descriptor Change Notices: Not Supported 00:33:39.742 Discovery Log Change Notices: Supported 00:33:39.742 Controller Attributes 00:33:39.742 128-bit Host Identifier: Not Supported 00:33:39.742 Non-Operational Permissive Mode: Not Supported 00:33:39.742 NVM Sets: Not Supported 00:33:39.742 Read Recovery Levels: Not Supported 00:33:39.742 Endurance Groups: Not Supported 00:33:39.742 Predictable Latency Mode: Not Supported 00:33:39.742 Traffic Based Keep ALive: Not Supported 00:33:39.743 Namespace Granularity: Not Supported 00:33:39.743 SQ Associations: Not Supported 00:33:39.743 UUID List: Not Supported 00:33:39.743 Multi-Domain Subsystem: Not Supported 00:33:39.743 Fixed Capacity Management: Not Supported 00:33:39.743 Variable Capacity Management: Not Supported 00:33:39.743 Delete Endurance Group: Not Supported 00:33:39.743 Delete NVM Set: Not Supported 00:33:39.743 Extended LBA Formats Supported: Not Supported 00:33:39.743 Flexible Data Placement Supported: Not Supported 00:33:39.743 00:33:39.743 Controller Memory Buffer Support 00:33:39.743 ================================ 00:33:39.743 Supported: No 00:33:39.743 00:33:39.743 Persistent Memory Region Support 00:33:39.743 ================================ 00:33:39.743 Supported: No 00:33:39.743 00:33:39.743 Admin Command Set Attributes 00:33:39.743 ============================ 00:33:39.743 Security Send/Receive: Not Supported 00:33:39.743 Format NVM: Not Supported 00:33:39.743 Firmware Activate/Download: Not Supported 00:33:39.743 Namespace Management: Not Supported 00:33:39.743 Device Self-Test: Not Supported 00:33:39.743 Directives: Not Supported 00:33:39.743 NVMe-MI: Not Supported 00:33:39.743 Virtualization Management: Not Supported 00:33:39.743 Doorbell Buffer Config: Not Supported 00:33:39.743 Get LBA Status Capability: Not Supported 00:33:39.743 Command & Feature Lockdown Capability: Not Supported 00:33:39.743 Abort Command Limit: 1 00:33:39.743 Async Event Request Limit: 1 00:33:39.743 Number of Firmware Slots: N/A 00:33:39.743 Firmware Slot 1 Read-Only: N/A 00:33:39.743 Firmware Activation Without Reset: N/A 00:33:39.743 Multiple Update Detection Support: N/A 
00:33:39.743 Firmware Update Granularity: No Information Provided 00:33:39.743 Per-Namespace SMART Log: No 00:33:39.743 Asymmetric Namespace Access Log Page: Not Supported 00:33:39.743 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:39.743 Command Effects Log Page: Not Supported 00:33:39.743 Get Log Page Extended Data: Supported 00:33:39.743 Telemetry Log Pages: Not Supported 00:33:39.743 Persistent Event Log Pages: Not Supported 00:33:39.743 Supported Log Pages Log Page: May Support 00:33:39.743 Commands Supported & Effects Log Page: Not Supported 00:33:39.743 Feature Identifiers & Effects Log Page:May Support 00:33:39.743 NVMe-MI Commands & Effects Log Page: May Support 00:33:39.743 Data Area 4 for Telemetry Log: Not Supported 00:33:39.743 Error Log Page Entries Supported: 1 00:33:39.743 Keep Alive: Not Supported 00:33:39.743 00:33:39.743 NVM Command Set Attributes 00:33:39.743 ========================== 00:33:39.743 Submission Queue Entry Size 00:33:39.743 Max: 1 00:33:39.743 Min: 1 00:33:39.743 Completion Queue Entry Size 00:33:39.743 Max: 1 00:33:39.743 Min: 1 00:33:39.743 Number of Namespaces: 0 00:33:39.743 Compare Command: Not Supported 00:33:39.743 Write Uncorrectable Command: Not Supported 00:33:39.743 Dataset Management Command: Not Supported 00:33:39.743 Write Zeroes Command: Not Supported 00:33:39.743 Set Features Save Field: Not Supported 00:33:39.743 Reservations: Not Supported 00:33:39.743 Timestamp: Not Supported 00:33:39.743 Copy: Not Supported 00:33:39.743 Volatile Write Cache: Not Present 00:33:39.743 Atomic Write Unit (Normal): 1 00:33:39.743 Atomic Write Unit (PFail): 1 00:33:39.743 Atomic Compare & Write Unit: 1 00:33:39.743 Fused Compare & Write: Not Supported 00:33:39.743 Scatter-Gather List 00:33:39.743 SGL Command Set: Supported 00:33:39.743 SGL Keyed: Not Supported 00:33:39.743 SGL Bit Bucket Descriptor: Not Supported 00:33:39.743 SGL Metadata Pointer: Not Supported 00:33:39.743 Oversized SGL: Not Supported 00:33:39.743 SGL Metadata Address: Not Supported 00:33:39.743 SGL Offset: Supported 00:33:39.743 Transport SGL Data Block: Not Supported 00:33:39.743 Replay Protected Memory Block: Not Supported 00:33:39.743 00:33:39.743 Firmware Slot Information 00:33:39.743 ========================= 00:33:39.743 Active slot: 0 00:33:39.743 00:33:39.743 00:33:39.743 Error Log 00:33:39.743 ========= 00:33:39.743 00:33:39.743 Active Namespaces 00:33:39.743 ================= 00:33:39.743 Discovery Log Page 00:33:39.743 ================== 00:33:39.743 Generation Counter: 2 00:33:39.743 Number of Records: 2 00:33:39.743 Record Format: 0 00:33:39.743 00:33:39.743 Discovery Log Entry 0 00:33:39.743 ---------------------- 00:33:39.743 Transport Type: 3 (TCP) 00:33:39.743 Address Family: 1 (IPv4) 00:33:39.743 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:39.743 Entry Flags: 00:33:39.743 Duplicate Returned Information: 0 00:33:39.743 Explicit Persistent Connection Support for Discovery: 0 00:33:39.743 Transport Requirements: 00:33:39.743 Secure Channel: Not Specified 00:33:39.743 Port ID: 1 (0x0001) 00:33:39.743 Controller ID: 65535 (0xffff) 00:33:39.743 Admin Max SQ Size: 32 00:33:39.743 Transport Service Identifier: 4420 00:33:39.743 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:39.743 Transport Address: 10.0.0.1 00:33:39.743 Discovery Log Entry 1 00:33:39.743 ---------------------- 00:33:39.743 Transport Type: 3 (TCP) 00:33:39.743 Address Family: 1 (IPv4) 00:33:39.743 Subsystem Type: 2 (NVM Subsystem) 00:33:39.743 Entry Flags: 
00:33:39.743 Duplicate Returned Information: 0 00:33:39.743 Explicit Persistent Connection Support for Discovery: 0 00:33:39.743 Transport Requirements: 00:33:39.743 Secure Channel: Not Specified 00:33:39.743 Port ID: 1 (0x0001) 00:33:39.743 Controller ID: 65535 (0xffff) 00:33:39.743 Admin Max SQ Size: 32 00:33:39.743 Transport Service Identifier: 4420 00:33:39.743 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:39.743 Transport Address: 10.0.0.1 00:33:39.743 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:39.743 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.002 get_feature(0x01) failed 00:33:40.002 get_feature(0x02) failed 00:33:40.002 get_feature(0x04) failed 00:33:40.002 ===================================================== 00:33:40.002 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:40.002 ===================================================== 00:33:40.002 Controller Capabilities/Features 00:33:40.002 ================================ 00:33:40.002 Vendor ID: 0000 00:33:40.002 Subsystem Vendor ID: 0000 00:33:40.002 Serial Number: fd88a28588194beabb52 00:33:40.002 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:40.002 Firmware Version: 6.7.0-68 00:33:40.002 Recommended Arb Burst: 6 00:33:40.002 IEEE OUI Identifier: 00 00 00 00:33:40.002 Multi-path I/O 00:33:40.002 May have multiple subsystem ports: Yes 00:33:40.002 May have multiple controllers: Yes 00:33:40.002 Associated with SR-IOV VF: No 00:33:40.002 Max Data Transfer Size: Unlimited 00:33:40.002 Max Number of Namespaces: 1024 00:33:40.002 Max Number of I/O Queues: 128 00:33:40.002 NVMe Specification Version (VS): 1.3 00:33:40.002 NVMe Specification Version (Identify): 1.3 00:33:40.002 Maximum Queue Entries: 1024 00:33:40.002 Contiguous Queues Required: No 00:33:40.002 Arbitration Mechanisms Supported 00:33:40.002 Weighted Round Robin: Not Supported 00:33:40.002 Vendor Specific: Not Supported 00:33:40.002 Reset Timeout: 7500 ms 00:33:40.002 Doorbell Stride: 4 bytes 00:33:40.002 NVM Subsystem Reset: Not Supported 00:33:40.002 Command Sets Supported 00:33:40.002 NVM Command Set: Supported 00:33:40.002 Boot Partition: Not Supported 00:33:40.002 Memory Page Size Minimum: 4096 bytes 00:33:40.002 Memory Page Size Maximum: 4096 bytes 00:33:40.002 Persistent Memory Region: Not Supported 00:33:40.002 Optional Asynchronous Events Supported 00:33:40.002 Namespace Attribute Notices: Supported 00:33:40.002 Firmware Activation Notices: Not Supported 00:33:40.002 ANA Change Notices: Supported 00:33:40.002 PLE Aggregate Log Change Notices: Not Supported 00:33:40.002 LBA Status Info Alert Notices: Not Supported 00:33:40.002 EGE Aggregate Log Change Notices: Not Supported 00:33:40.002 Normal NVM Subsystem Shutdown event: Not Supported 00:33:40.002 Zone Descriptor Change Notices: Not Supported 00:33:40.002 Discovery Log Change Notices: Not Supported 00:33:40.002 Controller Attributes 00:33:40.002 128-bit Host Identifier: Supported 00:33:40.002 Non-Operational Permissive Mode: Not Supported 00:33:40.002 NVM Sets: Not Supported 00:33:40.002 Read Recovery Levels: Not Supported 00:33:40.002 Endurance Groups: Not Supported 00:33:40.002 Predictable Latency Mode: Not Supported 00:33:40.002 Traffic Based Keep ALive: Supported 00:33:40.002 Namespace Granularity: Not Supported 
00:33:40.002 SQ Associations: Not Supported 00:33:40.002 UUID List: Not Supported 00:33:40.002 Multi-Domain Subsystem: Not Supported 00:33:40.002 Fixed Capacity Management: Not Supported 00:33:40.002 Variable Capacity Management: Not Supported 00:33:40.002 Delete Endurance Group: Not Supported 00:33:40.002 Delete NVM Set: Not Supported 00:33:40.002 Extended LBA Formats Supported: Not Supported 00:33:40.002 Flexible Data Placement Supported: Not Supported 00:33:40.002 00:33:40.002 Controller Memory Buffer Support 00:33:40.002 ================================ 00:33:40.002 Supported: No 00:33:40.002 00:33:40.002 Persistent Memory Region Support 00:33:40.002 ================================ 00:33:40.002 Supported: No 00:33:40.002 00:33:40.002 Admin Command Set Attributes 00:33:40.002 ============================ 00:33:40.002 Security Send/Receive: Not Supported 00:33:40.002 Format NVM: Not Supported 00:33:40.002 Firmware Activate/Download: Not Supported 00:33:40.002 Namespace Management: Not Supported 00:33:40.002 Device Self-Test: Not Supported 00:33:40.002 Directives: Not Supported 00:33:40.002 NVMe-MI: Not Supported 00:33:40.003 Virtualization Management: Not Supported 00:33:40.003 Doorbell Buffer Config: Not Supported 00:33:40.003 Get LBA Status Capability: Not Supported 00:33:40.003 Command & Feature Lockdown Capability: Not Supported 00:33:40.003 Abort Command Limit: 4 00:33:40.003 Async Event Request Limit: 4 00:33:40.003 Number of Firmware Slots: N/A 00:33:40.003 Firmware Slot 1 Read-Only: N/A 00:33:40.003 Firmware Activation Without Reset: N/A 00:33:40.003 Multiple Update Detection Support: N/A 00:33:40.003 Firmware Update Granularity: No Information Provided 00:33:40.003 Per-Namespace SMART Log: Yes 00:33:40.003 Asymmetric Namespace Access Log Page: Supported 00:33:40.003 ANA Transition Time : 10 sec 00:33:40.003 00:33:40.003 Asymmetric Namespace Access Capabilities 00:33:40.003 ANA Optimized State : Supported 00:33:40.003 ANA Non-Optimized State : Supported 00:33:40.003 ANA Inaccessible State : Supported 00:33:40.003 ANA Persistent Loss State : Supported 00:33:40.003 ANA Change State : Supported 00:33:40.003 ANAGRPID is not changed : No 00:33:40.003 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:40.003 00:33:40.003 ANA Group Identifier Maximum : 128 00:33:40.003 Number of ANA Group Identifiers : 128 00:33:40.003 Max Number of Allowed Namespaces : 1024 00:33:40.003 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:40.003 Command Effects Log Page: Supported 00:33:40.003 Get Log Page Extended Data: Supported 00:33:40.003 Telemetry Log Pages: Not Supported 00:33:40.003 Persistent Event Log Pages: Not Supported 00:33:40.003 Supported Log Pages Log Page: May Support 00:33:40.003 Commands Supported & Effects Log Page: Not Supported 00:33:40.003 Feature Identifiers & Effects Log Page:May Support 00:33:40.003 NVMe-MI Commands & Effects Log Page: May Support 00:33:40.003 Data Area 4 for Telemetry Log: Not Supported 00:33:40.003 Error Log Page Entries Supported: 128 00:33:40.003 Keep Alive: Supported 00:33:40.003 Keep Alive Granularity: 1000 ms 00:33:40.003 00:33:40.003 NVM Command Set Attributes 00:33:40.003 ========================== 00:33:40.003 Submission Queue Entry Size 00:33:40.003 Max: 64 00:33:40.003 Min: 64 00:33:40.003 Completion Queue Entry Size 00:33:40.003 Max: 16 00:33:40.003 Min: 16 00:33:40.003 Number of Namespaces: 1024 00:33:40.003 Compare Command: Not Supported 00:33:40.003 Write Uncorrectable Command: Not Supported 00:33:40.003 Dataset Management Command: Supported 
00:33:40.003 Write Zeroes Command: Supported 00:33:40.003 Set Features Save Field: Not Supported 00:33:40.003 Reservations: Not Supported 00:33:40.003 Timestamp: Not Supported 00:33:40.003 Copy: Not Supported 00:33:40.003 Volatile Write Cache: Present 00:33:40.003 Atomic Write Unit (Normal): 1 00:33:40.003 Atomic Write Unit (PFail): 1 00:33:40.003 Atomic Compare & Write Unit: 1 00:33:40.003 Fused Compare & Write: Not Supported 00:33:40.003 Scatter-Gather List 00:33:40.003 SGL Command Set: Supported 00:33:40.003 SGL Keyed: Not Supported 00:33:40.003 SGL Bit Bucket Descriptor: Not Supported 00:33:40.003 SGL Metadata Pointer: Not Supported 00:33:40.003 Oversized SGL: Not Supported 00:33:40.003 SGL Metadata Address: Not Supported 00:33:40.003 SGL Offset: Supported 00:33:40.003 Transport SGL Data Block: Not Supported 00:33:40.003 Replay Protected Memory Block: Not Supported 00:33:40.003 00:33:40.003 Firmware Slot Information 00:33:40.003 ========================= 00:33:40.003 Active slot: 0 00:33:40.003 00:33:40.003 Asymmetric Namespace Access 00:33:40.003 =========================== 00:33:40.003 Change Count : 0 00:33:40.003 Number of ANA Group Descriptors : 1 00:33:40.003 ANA Group Descriptor : 0 00:33:40.003 ANA Group ID : 1 00:33:40.003 Number of NSID Values : 1 00:33:40.003 Change Count : 0 00:33:40.003 ANA State : 1 00:33:40.003 Namespace Identifier : 1 00:33:40.003 00:33:40.003 Commands Supported and Effects 00:33:40.003 ============================== 00:33:40.003 Admin Commands 00:33:40.003 -------------- 00:33:40.003 Get Log Page (02h): Supported 00:33:40.003 Identify (06h): Supported 00:33:40.003 Abort (08h): Supported 00:33:40.003 Set Features (09h): Supported 00:33:40.003 Get Features (0Ah): Supported 00:33:40.003 Asynchronous Event Request (0Ch): Supported 00:33:40.003 Keep Alive (18h): Supported 00:33:40.003 I/O Commands 00:33:40.003 ------------ 00:33:40.003 Flush (00h): Supported 00:33:40.003 Write (01h): Supported LBA-Change 00:33:40.003 Read (02h): Supported 00:33:40.003 Write Zeroes (08h): Supported LBA-Change 00:33:40.003 Dataset Management (09h): Supported 00:33:40.003 00:33:40.003 Error Log 00:33:40.003 ========= 00:33:40.003 Entry: 0 00:33:40.003 Error Count: 0x3 00:33:40.003 Submission Queue Id: 0x0 00:33:40.003 Command Id: 0x5 00:33:40.003 Phase Bit: 0 00:33:40.003 Status Code: 0x2 00:33:40.003 Status Code Type: 0x0 00:33:40.003 Do Not Retry: 1 00:33:40.003 Error Location: 0x28 00:33:40.003 LBA: 0x0 00:33:40.003 Namespace: 0x0 00:33:40.003 Vendor Log Page: 0x0 00:33:40.003 ----------- 00:33:40.003 Entry: 1 00:33:40.003 Error Count: 0x2 00:33:40.003 Submission Queue Id: 0x0 00:33:40.003 Command Id: 0x5 00:33:40.003 Phase Bit: 0 00:33:40.003 Status Code: 0x2 00:33:40.003 Status Code Type: 0x0 00:33:40.003 Do Not Retry: 1 00:33:40.003 Error Location: 0x28 00:33:40.003 LBA: 0x0 00:33:40.003 Namespace: 0x0 00:33:40.003 Vendor Log Page: 0x0 00:33:40.003 ----------- 00:33:40.003 Entry: 2 00:33:40.003 Error Count: 0x1 00:33:40.003 Submission Queue Id: 0x0 00:33:40.003 Command Id: 0x4 00:33:40.003 Phase Bit: 0 00:33:40.003 Status Code: 0x2 00:33:40.003 Status Code Type: 0x0 00:33:40.003 Do Not Retry: 1 00:33:40.003 Error Location: 0x28 00:33:40.003 LBA: 0x0 00:33:40.003 Namespace: 0x0 00:33:40.003 Vendor Log Page: 0x0 00:33:40.003 00:33:40.003 Number of Queues 00:33:40.003 ================ 00:33:40.003 Number of I/O Submission Queues: 128 00:33:40.003 Number of I/O Completion Queues: 128 00:33:40.003 00:33:40.003 ZNS Specific Controller Data 00:33:40.003 
============================ 00:33:40.003 Zone Append Size Limit: 0 00:33:40.003 00:33:40.003 00:33:40.003 Active Namespaces 00:33:40.003 ================= 00:33:40.003 get_feature(0x05) failed 00:33:40.003 Namespace ID:1 00:33:40.003 Command Set Identifier: NVM (00h) 00:33:40.003 Deallocate: Supported 00:33:40.003 Deallocated/Unwritten Error: Not Supported 00:33:40.003 Deallocated Read Value: Unknown 00:33:40.003 Deallocate in Write Zeroes: Not Supported 00:33:40.003 Deallocated Guard Field: 0xFFFF 00:33:40.003 Flush: Supported 00:33:40.003 Reservation: Not Supported 00:33:40.003 Namespace Sharing Capabilities: Multiple Controllers 00:33:40.003 Size (in LBAs): 1953525168 (931GiB) 00:33:40.003 Capacity (in LBAs): 1953525168 (931GiB) 00:33:40.003 Utilization (in LBAs): 1953525168 (931GiB) 00:33:40.003 UUID: 6f766132-12fb-4931-a7c9-81432b2f6f90 00:33:40.003 Thin Provisioning: Not Supported 00:33:40.003 Per-NS Atomic Units: Yes 00:33:40.003 Atomic Boundary Size (Normal): 0 00:33:40.003 Atomic Boundary Size (PFail): 0 00:33:40.003 Atomic Boundary Offset: 0 00:33:40.003 NGUID/EUI64 Never Reused: No 00:33:40.003 ANA group ID: 1 00:33:40.003 Namespace Write Protected: No 00:33:40.003 Number of LBA Formats: 1 00:33:40.003 Current LBA Format: LBA Format #00 00:33:40.003 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:40.003 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:40.003 rmmod nvme_tcp 00:33:40.003 rmmod nvme_fabrics 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:40.003 00:58:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:41.908 
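The configure_kernel_target trace above drives the in-kernel nvmet target entirely through configfs, and the identify output just shown (Model Number: SPDK-nqn.2016-06.io.spdk:testnqn, Namespace ID 1 backed by the 931GiB device) confirms those attribute writes took effect. Condensed into a standalone bash sketch — the paths, values, and write order are taken from the trace, but the mapping of each echo to an attribute file name is inferred from the standard kernel nvmet configfs layout, so treat the file names as an assumption:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet                          # as in the trace; the tcp transport (nvmet-tcp) must also be available
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reported back as Model Number
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # the unclaimed disk found above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp  > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # the symlink is what starts the listener

The clean_kernel_target trace that follows reverses this: write 0 to the namespace's enable, remove the port symlink, rmdir the namespace, port, and subsystem, then modprobe -r nvmet_tcp nvmet.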
00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:41.908 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:42.167 00:58:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:44.699 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:44.699 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:44.699 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:44.699 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:44.699 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:44.699 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:44.957 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:45.895 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:45.895 00:33:45.895 real 0m16.242s 00:33:45.895 user 0m4.052s 00:33:45.895 sys 0m8.509s 00:33:45.895 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:45.895 00:58:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.895 ************************************ 00:33:45.895 END TEST nvmf_identify_kernel_target 00:33:45.895 ************************************ 00:33:45.895 00:58:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:45.895 00:58:57 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:45.895 00:58:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:45.895 00:58:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.895 00:58:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.895 ************************************ 00:33:45.895 START TEST nvmf_auth_host 00:33:45.895 ************************************ 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:45.895 * Looking for test storage... 00:33:45.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.895 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:46.156 00:58:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.433 
00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:51.433 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:51.433 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:51.433 Found net devices under 0000:86:00.0: 
cvl_0_0 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:51.433 Found net devices under 0000:86:00.1: cvl_0_1 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.433 00:59:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:51.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:51.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:33:51.691 00:33:51.691 --- 10.0.0.2 ping statistics --- 00:33:51.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.691 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:51.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:33:51.691 00:33:51.691 --- 10.0.0.1 ping statistics --- 00:33:51.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.691 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1592028 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1592028 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1592028 ']' 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
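The nvmf_tcp_init sequence above (repeated here from the identify test) is what gives the suite its two-host shape on a single machine: one ice port moves into a private namespace as the target side, the other stays in the root namespace as the initiator. Condensed, with the interface names and addresses exactly as in the trace:

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port of the two-port NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the root side
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back

Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD, which is why nvmf_tgt is launched that way just above.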
00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:51.691 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b7635839aa0e3e7cf01a17647236710c 00:33:51.949 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.41W 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b7635839aa0e3e7cf01a17647236710c 0 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b7635839aa0e3e7cf01a17647236710c 0 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b7635839aa0e3e7cf01a17647236710c 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.41W 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.41W 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.41W 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:52.208 
00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=03bd421c0676ae3fde5231976d33b3d139b9eac453323d86e67e968e96dc23d3 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sNl 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 03bd421c0676ae3fde5231976d33b3d139b9eac453323d86e67e968e96dc23d3 3 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 03bd421c0676ae3fde5231976d33b3d139b9eac453323d86e67e968e96dc23d3 3 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=03bd421c0676ae3fde5231976d33b3d139b9eac453323d86e67e968e96dc23d3 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sNl 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sNl 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.sNl 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7246fadb219b1d3e1bd34544045fd9aef577c6f811385071 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dNG 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7246fadb219b1d3e1bd34544045fd9aef577c6f811385071 0 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7246fadb219b1d3e1bd34544045fd9aef577c6f811385071 0 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7246fadb219b1d3e1bd34544045fd9aef577c6f811385071 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dNG 00:33:52.208 00:59:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dNG 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dNG 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1a27eeb4aa7adeb4b199c8411ed9269b43289cbf533b9183 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tnA 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a27eeb4aa7adeb4b199c8411ed9269b43289cbf533b9183 2 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a27eeb4aa7adeb4b199c8411ed9269b43289cbf533b9183 2 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a27eeb4aa7adeb4b199c8411ed9269b43289cbf533b9183 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tnA 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tnA 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tnA 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6344211b19a76c6911958a3f7cc69e29 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ruf 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6344211b19a76c6911958a3f7cc69e29 1 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6344211b19a76c6911958a3f7cc69e29 1 
00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6344211b19a76c6911958a3f7cc69e29 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:52.208 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ruf 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ruf 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ruf 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8dcb92e30617fcdd9e9aa94f1790b41 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.o5r 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8dcb92e30617fcdd9e9aa94f1790b41 1 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8dcb92e30617fcdd9e9aa94f1790b41 1 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8dcb92e30617fcdd9e9aa94f1790b41 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.o5r 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.o5r 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.o5r 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=f0c1b6c2f22759ebc7326384a006f8b42ee159dd13afc463 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aSI 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f0c1b6c2f22759ebc7326384a006f8b42ee159dd13afc463 2 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f0c1b6c2f22759ebc7326384a006f8b42ee159dd13afc463 2 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f0c1b6c2f22759ebc7326384a006f8b42ee159dd13afc463 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aSI 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aSI 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.aSI 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b188fbc072b60fa2aa45524e21125329 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OHz 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b188fbc072b60fa2aa45524e21125329 0 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b188fbc072b60fa2aa45524e21125329 0 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b188fbc072b60fa2aa45524e21125329 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OHz 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OHz 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OHz 00:33:52.468 00:59:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c41e14050d06f6642ca6b943510b2ab147a874e6c2e62cf1326326690bc5c34 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.M2G 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c41e14050d06f6642ca6b943510b2ab147a874e6c2e62cf1326326690bc5c34 3 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c41e14050d06f6642ca6b943510b2ab147a874e6c2e62cf1326326690bc5c34 3 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6c41e14050d06f6642ca6b943510b2ab147a874e6c2e62cf1326326690bc5c34 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:52.469 00:59:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:52.469 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.M2G 00:33:52.469 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.M2G 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.M2G 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1592028 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1592028 ']' 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
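Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps them as an ASCII hex string, and pipes that through an inline python snippet (collapsed to just "python -" in the trace) to produce the DHHC-1 secret written to a 0600 temp file. Comparing the raw key material with the secrets that surface later in the log (key b7635839... reappears as DHHC-1:00:Yjc2MzU4...:) suggests the standard NVMe DH-HMAC-CHAP layout: base64 over the ASCII key plus its little-endian CRC-32, behind a two-digit digest tag (00=null, 01=sha256, 02=sha384, 03=sha512). A sketch of that formatting step, under those assumptions:
key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 <<'PY' > "$file" && chmod 0600 "$file"
import base64, struct, sys, zlib
key = sys.argv[1].encode()                         # the ASCII hex string is the secret
digest = int(sys.argv[2])                          # 0=null, 1=sha256, 2=sha384, 3=sha512
blob = key + struct.pack('<I', zlib.crc32(key))    # append CRC-32 of the key, little endian
print(f'DHHC-1:{digest:02d}:{base64.b64encode(blob).decode()}:')
PY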
00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.41W 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.sNl ]] 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sNl 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:52.728 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dNG 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tnA ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tnA 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ruf 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.o5r ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o5r 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
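rpc_cmd in this loop drives the target's JSON-RPC socket (/var/tmp/spdk.sock), registering each generated key file, plus its bidirectional counterpart when one exists, with the target's keyring. Issued standalone with scripts/rpc.py, the first iteration is equivalent to:
scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.41W
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sNl   # controller-side key paired with key0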
00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.aSI 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OHz ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OHz 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.M2G 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
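configure_kernel_target, traced below, assembles a kernel NVMe/TCP target purely through the nvmet configfs tree: load the modules, create subsystem/namespace/port nodes, point the namespace at a block device, and link the subsystem to the port. Condensed, and assuming the /dev/nvme0n1 device selected just below plus the usual nvmet attribute names, it amounts to:
modprobe nvmet nvme-tcp
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr      # listen address inside the test topology
echo tcp      > ports/1/addr_trtype
echo 4420     > ports/1/addr_trsvcid
echo ipv4     > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/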
00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:52.729 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:52.988 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:52.988 00:59:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:55.523 Waiting for block devices as requested 00:33:55.523 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:55.523 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:55.782 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:55.782 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:55.782 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:56.048 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:56.048 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:56.048 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:56.048 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:56.370 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:56.370 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:56.370 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:56.370 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:56.370 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:56.628 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:56.628 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:56.628 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:57.191 No valid GPT data, bailing 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:57.191 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:57.450 00:33:57.450 Discovery Log Number of Records 2, Generation counter 2 00:33:57.450 =====Discovery Log Entry 0====== 00:33:57.450 trtype: tcp 00:33:57.450 adrfam: ipv4 00:33:57.450 subtype: current discovery subsystem 00:33:57.450 treq: not specified, sq flow control disable supported 00:33:57.450 portid: 1 00:33:57.450 trsvcid: 4420 00:33:57.450 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:57.450 traddr: 10.0.0.1 00:33:57.450 eflags: none 00:33:57.450 sectype: none 00:33:57.450 =====Discovery Log Entry 1====== 00:33:57.450 trtype: tcp 00:33:57.450 adrfam: ipv4 00:33:57.450 subtype: nvme subsystem 00:33:57.450 treq: not specified, sq flow control disable supported 00:33:57.450 portid: 1 00:33:57.450 trsvcid: 4420 00:33:57.450 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:57.450 traddr: 10.0.0.1 00:33:57.450 eflags: none 00:33:57.450 sectype: none 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 
]] 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.450 00:59:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.708 nvme0n1 00:33:57.708 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.708 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.708 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.708 00:59:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.708 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.708 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.708 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.708 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.709 
00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 nvme0n1 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:33:57.967 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.968 00:59:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.968 nvme0n1 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
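Each connect_authenticate pass narrows the SPDK host to a single digest/DH-group pair before dialing the kernel target with the matching key (and controller key, when one was generated), then detaches so the next combination starts clean. As standalone RPCs, one sha256/ffdhe2048 pass with keyid 1 looks like:
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 on success
scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next pass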
00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.968 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.227 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.228 nvme0n1 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:33:58.228 00:59:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.228 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.487 nvme0n1 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.487 00:59:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.487 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.487 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.487 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.487 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.746 nvme0n1 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.746 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.004 nvme0n1 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.004 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.262 nvme0n1 00:33:59.262 
00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.262 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.521 nvme0n1 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
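Each pass of the trace above follows one fixed cycle per (digest, dhgroup, keyid) combination: nvmet_auth_set_key writes the hash name (the echo 'hmac(sha256)' line), the FFDHE group, and the DHHC-1 secrets into the kernel target's per-host attributes, then connect_authenticate restricts the SPDK initiator to exactly that digest and DH group with bdev_nvme_set_options and attaches with the matching --dhchap-key/--dhchap-ctrlr-key before verifying and detaching the controller. A minimal sketch of one such cycle, assuming the stock scripts/rpc.py client in place of the suite's rpc_cmd wrapper, an illustrative nvmet configfs layout, and key names (key3, ckey3) already registered with the initiator earlier in host/auth.sh — none of this is copied verbatim from the script:

    #!/usr/bin/env bash
    # Sketch of one auth test round; rpc path, configfs layout and key
    # registration are assumptions, not reproduced from host/auth.sh.
    rpc=scripts/rpc.py
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    digest=sha256 dhgroup=ffdhe3072 keyid=3
    key='DHHC-1:02:ZjBj...'    # secrets shortened here; full values appear in the log
    ckey='DHHC-1:00:YjE4...'

    # Target side: what nvmet_auth_set_key's echo lines in the trace are doing.
    echo "hmac($digest)" > "$hostdir/dhchap_hash"
    echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"
    echo "$key"          > "$hostdir/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"

    # Initiator side: allow exactly one digest/DH group, then attach with DH-HMAC-CHAP.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "ckey$keyid"}

    # Verify the controller surfaced under the expected name, then tear it down.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0

The 10.0.0.1 target address is not hard-coded: as the repeated get_main_ns_ip fragments show, the helper fills an associative array of candidate variable names keyed by transport (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), picks the entry for the transport under test, and dereferences it to the resolved address before echoing it.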
00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.521 00:59:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.780 nvme0n1 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.780 
00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.780 00:59:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.780 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.039 nvme0n1 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.039 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:00.040 00:59:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.040 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.299 nvme0n1 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.299 00:59:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.299 00:59:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.558 nvme0n1 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.558 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.817 00:59:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.817 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 nvme0n1 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
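One detail worth calling out: key index 4 carries no companion controller key (the trace shows ckey= and [[ -z '' ]] where the other indices echo a DHHC-1 secret), so those rounds exercise unidirectional authentication in which only the host proves its identity. The mechanism is the bash ":+" alternate-value expansion visible in the ckey=(...) lines at host/auth.sh@58: it emits the --dhchap-ctrlr-key flag pair only when a controller secret is set and non-empty. A small self-contained illustration of that expansion, with shortened placeholder secrets:

    #!/usr/bin/env bash
    # ckeys[4] is deliberately empty: ":+" treats empty the same as unset,
    # so no --dhchap-ctrlr-key flags are produced and auth stays host-only.
    ckeys=([1]='DHHC-1:02:MWEy...' [3]='DHHC-1:00:YjE4...' [4]='')

    for keyid in 1 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-host-only (no controller authentication)}"
    done

    # Expected output:
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=3 -> --dhchap-ctrlr-key ckey3
    # keyid=4 -> host-only (no controller authentication)

When a controller key is present the attach becomes bidirectional — the target must also answer the host's challenge using the ckey secret — which is why every successful round, in either direction, still ends with the same bdev_nvme_get_controllers/jq name check and detach seen throughout this trace.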
00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.076 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.077 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:01.077 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.077 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.335 nvme0n1 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.335 00:59:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.335 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.336 00:59:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.594 nvme0n1 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:01.594 00:59:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.594 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.160 nvme0n1 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.160 
00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.160 00:59:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.160 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.418 nvme0n1 00:34:02.418 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.418 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.418 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.418 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.419 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.419 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.419 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.419 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.419 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.419 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.677 00:59:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.677 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.677 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.677 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.677 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.677 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.677 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.936 nvme0n1 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.936 
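The ip_candidates stanza that repeats before every attach is the helper picking which address to dial for the transport under test. A plausible reconstruction from the trace (the map entries and the final echo 10.0.0.1 are taken from the log; the ${!var} indirection filling in how that value is produced is an assumption, as is the TEST_TRANSPORT variable name):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side address for RDMA runs
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator-side address for TCP runs

        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace shows [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}                 # dereference: NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1  # trace shows [[ -z 10.0.0.1 ]]
        echo "$ip"
    }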
00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:02.936 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.937 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.504 nvme0n1 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:03.504 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.505 00:59:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.763 nvme0n1 00:34:03.763 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.763 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.763 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.763 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.763 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.763 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.021 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.588 nvme0n1 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.588 00:59:15 
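Every secret in this run uses the NVMe in-band authentication key format DHHC-1:<xx>:<base64>:, where, as background from the NVMe-oF spec rather than anything this log asserts, xx records how the secret was transformed (00 = plain, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick length check on key0 from this run:

    # Payload should decode to secret + 4-byte CRC32: 36, 52 or 68 bytes
    # for 32/48/64-byte secrets. Illustration only.
    key='DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC:'
    payload=${key#DHHC-1:*:}     # strip the DHHC-1:xx: prefix
    payload=${payload%:}         # strip the trailing colon
    echo -n "$payload" | base64 -d | wc -c    # prints 36 here (32 + 4)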
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.588 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.589 00:59:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.589 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.157 nvme0n1 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.157 00:59:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.724 nvme0n1 00:34:05.724 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.724 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.724 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.724 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.724 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.724 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.982 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.983 
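The echo 'hmac(sha256)' / echo ffdhe8192 / echo DHHC-1:... triplets at host/auth.sh@48-51 are nvmet_auth_set_key pushing the matching expectations into the kernel nvmet target. The trace only shows the echoed values; the configfs destinations below are an assumption based on the stock Linux nvmet host attributes, not something the log confirms:

    # Hypothetical expansion of nvmet_auth_set_key, assuming the standard
    # /sys/kernel/config/nvmet/hosts/<hostnqn> attribute layout.
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$hostdir/dhchap_hash"      # digest, as echoed in the log
    echo ffdhe8192      > "$hostdir/dhchap_dhgroup"   # DH group under test
    echo "$key"         > "$hostdir/dhchap_key"       # host secret (DHHC-1:...)
    [[ -n $ckey ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"   # bidirectional only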
00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
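The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line seen in each iteration (host/auth.sh@58) is what makes key 4 unidirectional: its controller key is empty (the trace shows ckey= and [[ -z '' ]]), so the array expands to nothing and the attach carries no --dhchap-ctrlr-key, exactly as in the key4 attaches in this log. The idiom in isolation:

    # ${var:+word} expands to word only if var is set and non-empty, so the
    # extra flag appears on the command line only for bidirectional keys.
    ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey_args[@]}"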
00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.983 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.549 nvme0n1 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.549 
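Each cycle's pass/fail gate is the [[ nvme0 == \n\v\m\e\0 ]] comparison that follows every attach: xtrace prints the right-hand side backslash-escaped because the test escapes it so bash compares a literal string instead of treating it as a glob pattern. The check on its own:

    # Expect exactly one controller, named nvme0; the escaped RHS forces a
    # literal comparison rather than pattern matching.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]]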
00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.549 00:59:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.117 nvme0n1 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.117 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.375 nvme0n1 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
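At host/auth.sh@100-102 the trace has just rolled over from sha256/ffdhe8192 to sha384/ffdhe2048, i.e. the outer sweep advanced one digest. The loop nest implied by the @100/@101/@102/@103 markers, with the list contents inferred from the combinations this log exercises rather than spelled out in it:

    # Sweep implied by the for-loops at host/auth.sh@100-103.
    for digest in "${digests[@]}"; do         # sha256, sha384, ... per this log
        for dhgroup in "${dhgroups[@]}"; do   # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do    # keys 0-4; key 4 has no ckey
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
            done
        done
    done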
00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.375 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.376 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.376 00:59:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.376 00:59:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.376 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.376 00:59:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.635 nvme0n1 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.635 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.895 nvme0n1 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.895 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.896 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.155 nvme0n1 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.155 nvme0n1 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.155 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.414 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
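
Each round in this trace follows the same shape: nvmet_auth_set_key primes the kernel target with the key material for one digest/DH-group/keyid combination, bdev_nvme_set_options restricts the SPDK host to exactly that digest and DH group, bdev_nvme_attach_controller performs the authenticated connect, and the nvme0 check via bdev_nvme_get_controllers plus jq confirms the controller actually came up before it is detached for the next round. Below is a minimal sketch of one such round re-run by hand, for the sha384/ffdhe3072/keyid-0 case in progress here. The two rpc_cmd invocations and the echoed values are verbatim from the trace; rpc_cmd is the harness wrapper around SPDK's rpc.py; key0/ckey0 are keyring names registered earlier in the test (outside this excerpt); and the configfs path is an assumption based on the Linux nvmet per-host DH-HMAC-CHAP attributes, not something this log itself shows:

  # Target side (assumed configfs layout; mirrors the nvmet_auth_set_key echoes above).
  nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
  echo 'hmac(sha384)' > "$nvmet_host/dhchap_hash"       # digest under test
  echo ffdhe3072 > "$nvmet_host/dhchap_dhgroup"         # DH group under test
  echo 'DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC:' > "$nvmet_host/dhchap_key"
  echo 'DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=:' > "$nvmet_host/dhchap_ctrl_key"

  # Host side: allow exactly one digest and one DH group, connect with
  # bidirectional authentication, verify the controller, then tear it down.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
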
00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.415 nvme0n1 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.415 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.673 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.673 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.673 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:08.673 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.673 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
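
One detail worth flagging before the ffdhe3072 rounds continue: the controller key is optional. host/auth.sh builds it as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so an empty ckeys[keyid] makes the whole flag pair vanish; that is why keyid 4, both in the ffdhe2048 round above and in the rounds below, shows a bare "ckey=" and an attach with --dhchap-key key4 only, exercising unidirectional authentication in which just the host proves its identity. The second field of the DHHC-1 secrets also varies deliberately: in the nvme-cli/NVMe secret representation, 00 indicates the base64 secret is used as-is while 01/02/03 request a SHA-256/384/512 transformation, which is why the key lengths differ across keyids. A standalone illustration of the expansion idiom, with made-up array contents:

  # ${var:+word} expands to nothing when var is unset or empty, so an empty
  # controller key drops both the flag and its argument (values are hypothetical).
  ckeys=('DHHC-1:02:aaaa:' '')   # index 1 left empty, like ckey4 in the trace
  for keyid in 0 1; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid ${keyid}: --dhchap-key key${keyid} ${ckey[*]}"
  done
  # keyid 0 prints both flags (bidirectional); keyid 1 prints the host key only.
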
00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.674 00:59:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.674 nvme0n1 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.674 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.933 nvme0n1 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.933 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.934 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.934 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.934 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.193 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.194 nvme0n1 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:09.194 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.453 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.453 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:09.453 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.453 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:09.453 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.453 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.454 nvme0n1 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.454 00:59:20 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.454 00:59:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.454 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.714 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.973 nvme0n1 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.973 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.232 nvme0n1 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.232 00:59:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.232 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.233 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.492 nvme0n1 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:10.492 00:59:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.492 00:59:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.752 nvme0n1 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:10.752 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.753 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.012 nvme0n1 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.012 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.272 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.531 nvme0n1 00:34:11.531 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.531 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.531 00:59:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.531 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.531 00:59:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.531 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.140 nvme0n1 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.140 00:59:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.140 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.415 nvme0n1 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.415 00:59:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.983 nvme0n1 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
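The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) drive the kernel target side of DH-HMAC-CHAP: the digest is wrapped as 'hmac(...)', then the DH group, the DHHC-1 secret, and, only when a controller key exists, the bidirectional ckey are echoed into the host's nvmet attributes. A minimal sketch of that helper, assuming the standard nvmet configfs layout and the keys/ckeys arrays populated earlier in auth.sh (the configfs path itself never appears in this trace, so it is an assumption):

    # Sketch of the target-side helper; configfs path and keys/ckeys arrays are assumed.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac($digest)" > "$host/dhchap_hash"       # e.g. 'hmac(sha384)', auth.sh@48
        echo "$dhgroup" > "$host/dhchap_dhgroup"         # e.g. ffdhe6144, auth.sh@49
        echo "$key" > "$host/dhchap_key"                 # DHHC-1 secret, auth.sh@50
        [[ -z "$ckey" ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51
    }

The conditional on the last write matters: keyids with an empty ckey (keyid 4 in this run) skip dhchap_ctrl_key entirely, so those iterations exercise unidirectional rather than bidirectional authentication.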
00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.983 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.551 nvme0n1 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
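Each connect_authenticate pass (host/auth.sh@55-65) is the same host-side RPC sequence: pin the initiator to a single digest/DH-group pair with bdev_nvme_set_options, attach with the key under test, confirm the controller came up, and detach. Condensed into standalone rpc.py calls with the address, port, and NQNs taken verbatim from this trace (key0/ckey0 are names of keys registered earlier in the run, so this is a sketch rather than a copy-paste recipe):

    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0

The `[[ nvme0 == \n\v\m\e\0 ]]` checks in the trace are exactly this jq comparison: the test passes only if attach produced a controller named nvme0 under the negotiated parameters.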
00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.551 00:59:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.118 nvme0n1 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.118 00:59:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.686 nvme0n1 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.686 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.253 nvme0n1 00:34:15.253 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.253 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.253 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.253 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.253 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.512 00:59:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.081 nvme0n1 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.081 00:59:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.081 00:59:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.649 nvme0n1 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.649 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.909 nvme0n1 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.909 00:59:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.909 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.169 nvme0n1 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.169 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.429 nvme0n1 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.429 00:59:28 
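The pass that closes above is the complete host-side sequence for one (digest, dhgroup, keyid) tuple: restrict the allowed DH-HMAC-CHAP parameters, attach with the matching key pair, confirm the controller came up, tear it down. Reduced to plain commands (every RPC name and flag is taken verbatim from the trace at host/auth.sh@60-65; rpc_cmd is assumed to forward to SPDK's JSON-RPC client, as the surrounding helpers suggest):

    # One connect_authenticate pass for sha512/ffdhe2048, keyid 2,
    # distilled from the xtrace above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # must print: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0
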
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.429 00:59:28 
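The nvmet_auth_set_key echoes traced above (host/auth.sh@48-51) carry the target-side half of the handshake, but xtrace does not print redirections, so their destinations are invisible here. Assuming the standard Linux nvmet configfs layout (the paths below are an assumption; only the echoed values come from the log), the writes would land roughly as:

    # Hypothetical expansion of nvmet_auth_set_key for keyid 3.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    echo 'hmac(sha512)' > "$host/dhchap_hash"      # auth.sh@48
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # auth.sh@49
    echo "$key"         > "$host/dhchap_key"       # auth.sh@50
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # auth.sh@51 guard
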
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.429 00:59:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.689 nvme0n1 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.689 nvme0n1 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.689 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.948 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.949 nvme0n1 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.949 
00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.949 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.208 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.209 00:59:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 nvme0n1 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.209 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
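A bash detail worth pausing on: the repeated ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) at host/auth.sh@58 uses the :+ expansion to emit the controller-key flag pair only when a ckey exists for that keyid. That is why the keyid 4 attaches in this section carry no --dhchap-ctrlr-key (auth.sh@46 sets ckey= empty there), making those passes one-way authentication. A standalone illustration of the idiom:

    # ${var:+word} expands to word only if var is set and non-empty.
    ckeys=([1]=ctrl-secret [4]="")   # indexed by keyid, as in auth.sh
    keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"    # -> --dhchap-ctrlr-key ckey1  (bidirectional)
    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # -> 0, flag omitted entirely  (one-way)
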
00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.469 nvme0n1 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.469 00:59:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.469 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.728 00:59:30 
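The common/autotest_common.sh@559/@10/@587 triplet bracketing every RPC above is the harness muting xtrace while the JSON-RPC call runs, then asserting on the saved exit status, which is what the bare [[ 0 == 0 ]] lines are. A sketch of that shape; only the helper names and the set +x / [[ 0 == 0 ]] lines are actually visible in this log, and the bodies are assumptions:

    xtrace_disable() {
        set +x                 # autotest_common.sh@10 in the trace
    }
    rpc_cmd() {
        xtrace_disable         # keep thousands of RPC invocations out of the log
        "$rootdir/scripts/rpc.py" "$@"
        local rc=$?            # capture before anything else runs
        [[ $rc == 0 ]]         # surfaces as "[[ 0 == 0 ]]" at @587
        # (a matching xtrace re-enable step is elided in this sketch)
    }
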
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
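Every iteration ends with the same check-and-teardown, just traced at host/auth.sh@64-65: the controller list is reduced to a name with jq and compared against the bdev created by the attach, then the controller is detached so the next (dhgroup, keyid) tuple starts clean. In isolation, and assuming the suite runs with errexit so a mismatch fails the case:

    # Post-connect verification, as at host/auth.sh@64-65.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # the trace's \n\v\m\e\0 is the same literal
                             # comparison, escaped to suppress globbing
    rpc_cmd bdev_nvme_detach_controller nvme0
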
00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.729 nvme0n1 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.729 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.988 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:18.989 
00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.989 nvme0n1 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.989 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.249 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.509 nvme0n1 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.509 00:59:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.509 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.510 00:59:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.770 nvme0n1 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
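get_main_ns_ip (nvmf/common.sh@741-755) unwinds completely in the passes above, and the trace gives away its trick: the associative array maps each transport to the name of an environment variable, and the jump from ip=NVMF_INITIATOR_IP to [[ -z 10.0.0.1 ]] shows that name being dereferenced before the echo. Re-assembled from the trace (the $TEST_TRANSPORT guard is inferred from the [[ -z tcp ]] line, not read from source):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP    # nvmf/common.sh@744
            ["tcp"]=NVMF_INITIATOR_IP        # nvmf/common.sh@745
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # holds a *name*, not an address
        [[ -z ${!ip} ]] && return 1             # indirect: ${!ip} -> 10.0.0.1
        echo "${!ip}"
    }
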
00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.770 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.030 nvme0n1 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.030 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.290 nvme0n1 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.290 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.549 00:59:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.809 nvme0n1 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
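
The get_main_ns_ip expansion that recurs before every attach in this trace (nvmf/common.sh@741-755) is a small indirection helper: it maps the transport under test to the name of an environment variable, then dereferences that name to obtain the address. A rough reconstruction from the xtrace records above, offered only as a sketch (the exact upstream body in nvmf/common.sh may differ slightly):

    # Sketch reconstructed from the nvmf/common.sh@741-755 xtrace records above;
    # rdma resolves NVMF_FIRST_TARGET_IP, tcp resolves NVMF_INITIATOR_IP.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Bail out if the transport is unset or has no mapped variable
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1           # indirect expansion: the address itself
        echo "${!ip}"
    }

With TEST_TRANSPORT=tcp in this run the helper resolves NVMF_INITIATOR_IP, which is why every lookup in the trace ends in echo 10.0.0.1.
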
00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.809 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.068 nvme0n1 00:34:21.068 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.068 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.068 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.068 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.068 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.069 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:21.328 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
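
Each pass of the host/auth.sh loop traced here is the same round trip: program the expected DHHC-1 key into the target, restrict the host to a single digest/dhgroup pair, attach with the matching per-keyid credentials, check that the controller actually came up (i.e. DH-HMAC-CHAP succeeded), and detach before the next combination. Stripped of the xtrace noise, one iteration (sha512 / ffdhe6144 / keyid 1, as in the surrounding records) boils down to the sketch below; rpc_cmd, nvmet_auth_set_key and the key1/ckey1 names are provided by the test harness set up earlier in this run:

    # Target side: expect keyid 1 with hmac(sha512) over ffdhe6144
    nvmet_auth_set_key sha512 ffdhe6144 1
    # Host side: only negotiate this digest/dhgroup pair
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Attach with bidirectional authentication (host key + controller key)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The controller only shows up if authentication succeeded
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Tear down before the next (digest, dhgroup, keyid) combination
    rpc_cmd bdev_nvme_detach_controller nvme0

The remaining ffdhe6144 keyids and the ffdhe8192 passes that follow repeat exactly this pattern, and the section ends with the negative checks visible near the bottom: reattaching with no key, and then with a mismatched key (key2), both of which are expected to fail with the JSON-RPC "Input/output error" responses shown there.
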
00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.329 00:59:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.588 nvme0n1 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.588 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.846 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.846 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.846 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.104 nvme0n1 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:22.104 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.105 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 nvme0n1 00:34:22.672 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.672 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.672 00:59:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.672 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.672 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 00:59:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.672 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.931 nvme0n1 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.931 00:59:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.931 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.189 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjc2MzU4MzlhYTBlM2U3Y2YwMWExNzY0NzIzNjcxMGMeLDrC: 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: ]] 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDNiZDQyMWMwNjc2YWUzZmRlNTIzMTk3NmQzM2IzZDEzOWI5ZWFjNDUzMzIzZDg2ZTY3ZTk2OGU5NmRjMjNkM5qFzuQ=: 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.190 00:59:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 nvme0n1 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.758 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.759 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.759 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.759 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.327 nvme0n1 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.327 00:59:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjM0NDIxMWIxOWE3NmM2OTExOTU4YTNmN2NjNjllMjl0wSwE: 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YThkY2I5MmUzMDYxN2ZjZGQ5ZTlhYTk0ZjE3OTBiNDFryO+L: 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.327 00:59:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.894 nvme0n1 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.894 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.895 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:24.895 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.895 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjBjMWI2YzJmMjI3NTllYmM3MzI2Mzg0YTAwNmY4YjQyZWUxNTlkZDEzYWZjNDYzuYJLaw==: 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: ]] 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjE4OGZiYzA3MmI2MGZhMmFhNDU1MjRlMjExMjUzMjmAucsi: 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:25.153 00:59:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.153 00:59:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.720 nvme0n1 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmM0MWUxNDA1MGQwNmY2NjQyY2E2Yjk0MzUxMGIyYWIxNDdhODc0ZTZjMmU2MmNmMTMyNjMyNjY5MGJjNWMzNOINpqM=: 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:25.720 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.287 nvme0n1 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI0NmZhZGIyMTliMWQzZTFiZDM0NTQ0MDQ1ZmQ5YWVmNTc3YzZmODExMzg1MDcxv/axcA==: 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEyN2VlYjRhYTdhZGViNGIxOTljODQxMWVkOTI2OWI0MzI4OWNiZjUzM2I5MTgzQMbeAg==: 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.287 
00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.287 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.288 request: 00:34:26.288 { 00:34:26.288 "name": "nvme0", 00:34:26.288 "trtype": "tcp", 00:34:26.288 "traddr": "10.0.0.1", 00:34:26.288 "adrfam": "ipv4", 00:34:26.288 "trsvcid": "4420", 00:34:26.288 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:26.288 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:26.288 "prchk_reftag": false, 00:34:26.288 "prchk_guard": false, 00:34:26.288 "hdgst": false, 00:34:26.288 "ddgst": false, 00:34:26.288 "method": "bdev_nvme_attach_controller", 00:34:26.288 "req_id": 1 00:34:26.288 } 00:34:26.288 Got JSON-RPC error response 00:34:26.288 response: 00:34:26.288 { 00:34:26.288 "code": -5, 00:34:26.288 "message": "Input/output error" 00:34:26.288 } 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.288 00:59:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.547 request: 00:34:26.547 { 00:34:26.547 "name": "nvme0", 00:34:26.547 "trtype": "tcp", 00:34:26.547 "traddr": "10.0.0.1", 00:34:26.547 "adrfam": "ipv4", 00:34:26.547 "trsvcid": "4420", 00:34:26.547 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:26.547 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:26.547 "prchk_reftag": false, 00:34:26.547 "prchk_guard": false, 00:34:26.547 "hdgst": false, 00:34:26.547 "ddgst": false, 00:34:26.547 "dhchap_key": "key2", 00:34:26.547 "method": "bdev_nvme_attach_controller", 00:34:26.547 "req_id": 1 00:34:26.547 } 00:34:26.547 Got JSON-RPC error response 00:34:26.547 response: 00:34:26.547 { 00:34:26.547 "code": -5, 00:34:26.547 "message": "Input/output error" 00:34:26.547 } 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:26.547 00:59:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.547 00:59:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.547 request: 00:34:26.547 { 00:34:26.547 "name": "nvme0", 00:34:26.547 "trtype": "tcp", 00:34:26.547 "traddr": "10.0.0.1", 00:34:26.547 "adrfam": "ipv4", 
00:34:26.547 "trsvcid": "4420", 00:34:26.547 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:26.547 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:26.547 "prchk_reftag": false, 00:34:26.547 "prchk_guard": false, 00:34:26.547 "hdgst": false, 00:34:26.547 "ddgst": false, 00:34:26.547 "dhchap_key": "key1", 00:34:26.547 "dhchap_ctrlr_key": "ckey2", 00:34:26.547 "method": "bdev_nvme_attach_controller", 00:34:26.547 "req_id": 1 00:34:26.547 } 00:34:26.547 Got JSON-RPC error response 00:34:26.547 response: 00:34:26.547 { 00:34:26.547 "code": -5, 00:34:26.547 "message": "Input/output error" 00:34:26.547 } 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:26.547 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:26.548 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:26.548 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:26.548 rmmod nvme_tcp 00:34:26.807 rmmod nvme_fabrics 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1592028 ']' 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1592028 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1592028 ']' 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1592028 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1592028 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1592028' 00:34:26.807 killing process with pid 1592028 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1592028 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1592028 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:26.807 00:59:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:29.342 00:59:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:31.892 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:31.892 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:32.830 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:32.830 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.41W /tmp/spdk.key-null.dNG /tmp/spdk.key-sha256.Ruf /tmp/spdk.key-sha384.aSI /tmp/spdk.key-sha512.M2G 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:32.830 00:59:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:35.365 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:35.365 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:35.365 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:35.624 00:34:35.624 real 0m49.699s 00:34:35.624 user 0m44.218s 00:34:35.625 sys 0m12.340s 00:34:35.625 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:35.625 00:59:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.625 ************************************ 00:34:35.625 END TEST nvmf_auth_host 00:34:35.625 ************************************ 00:34:35.625 00:59:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:35.625 00:59:47 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:34:35.625 00:59:47 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:35.625 00:59:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:35.625 00:59:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:35.625 00:59:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.625 ************************************ 00:34:35.625 START TEST nvmf_digest 00:34:35.625 ************************************ 00:34:35.625 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:35.884 * Looking for test storage... 
00:34:35.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:35.884 00:59:47 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:35.884 00:59:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:41.158 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:41.158 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:41.158 Found net devices under 0000:86:00.0: cvl_0_0 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:41.158 Found net devices under 0000:86:00.1: cvl_0_1 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:41.158 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.416 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.416 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.416 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:41.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:34:41.417 00:34:41.417 --- 10.0.0.2 ping statistics --- 00:34:41.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.417 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:34:41.417 00:34:41.417 --- 10.0.0.1 ping statistics --- 00:34:41.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.417 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:41.417 ************************************ 00:34:41.417 START TEST nvmf_digest_clean 00:34:41.417 ************************************ 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1605094 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1605094 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1605094 ']' 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.417 
00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:41.417 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.674 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:41.674 00:59:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.674 [2024-07-13 00:59:53.017950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:41.674 [2024-07-13 00:59:53.017992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.674 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.674 [2024-07-13 00:59:53.074194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.674 [2024-07-13 00:59:53.113258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.675 [2024-07-13 00:59:53.113298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:41.675 [2024-07-13 00:59:53.113306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.675 [2024-07-13 00:59:53.113313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.675 [2024-07-13 00:59:53.113319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
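The app_setup_trace notices above are the target telling us how to inspect this run: it was started with -e 0xFFFF, so every tracepoint group is enabled, and the trace history sits in shared memory while the app is alive. A minimal sketch of acting on those notices, assuming the target is still running; the -s nvmf -i 0 arguments and the /dev/shm/nvmf_trace.0 path are quoted from the notices themselves, while trace.out is a hypothetical output file:

  # decode the live tracepoint buffer of nvmf app instance 0
  spdk_trace -s nvmf -i 0 > trace.out
  # or keep the raw shared-memory ring for offline decoding after the app exits
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0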
00:34:41.675 [2024-07-13 00:59:53.113341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.675 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.933 null0 00:34:41.933 [2024-07-13 00:59:53.301891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.933 [2024-07-13 00:59:53.326049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1605296 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1605296 /var/tmp/bperf.sock 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1605296 ']' 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:41.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:41.933 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:41.933 [2024-07-13 00:59:53.376110] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:41.933 [2024-07-13 00:59:53.376152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605296 ] 00:34:41.933 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.933 [2024-07-13 00:59:53.444816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.933 [2024-07-13 00:59:53.485382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:42.193 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:42.193 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:34:42.193 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:42.193 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:42.193 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:42.193 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.193 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:42.452 nvme0n1 00:34:42.452 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:42.452 00:59:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:42.710 Running I/O for 2 seconds... 
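Condensed from the xtrace entries above, each benchmark pass boils down to four commands (a sketch with the absolute workspace prefixes shortened; host/digest.sh is the authoritative driver). The --ddgst flag on the attach is the point of the test: it enables the NVMe/TCP data digest, which forces a crc32c computation for every I/O that the accel statistics check can later count.

  # start bdevperf idle, then configure and run it over its RPC socket
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The latency table that follows reports the outcome of this first pass.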
00:34:44.614
00:34:44.614 Latency(us)
00:34:44.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.614 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:44.614 nvme0n1 : 2.00 25060.21 97.89 0.00 0.00 5102.44 2592.95 16070.57
00:34:44.614 ===================================================================================================================
00:34:44.614 Total : 25060.21 97.89 0.00 0.00 5102.44 2592.95 16070.57
00:34:44.614 0
00:34:44.614 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:44.615 | select(.opcode=="crc32c")
00:34:44.615 | "\(.module_name) \(.executed)"'
00:34:44.615 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:44.873 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1605296
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1605296 ']'
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1605296
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1605296
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1605296'
killing process with pid 1605296
00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1605296
Received shutdown signal, test time was about 2.000000 seconds
00:34:44.873
00:34:44.873 Latency(us)
00:34:44.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:44.873 ===================================================================================================================
00:34:44.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:44.873 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1605296
00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:59:56
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1605784 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1605784 /var/tmp/bperf.sock 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1605784 ']' 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:45.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:45.132 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:45.132 [2024-07-13 00:59:56.547641] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:45.132 [2024-07-13 00:59:56.547688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605784 ] 00:34:45.132 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:45.132 Zero copy mechanism will not be used. 
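Easy to miss in the trace above (host/digest.sh@93 through @96) is the actual pass/fail rule of each run: pull accel framework statistics over the bperf RPC socket, keep only the crc32c opcode entry, and require that the expected module executed at least one operation. A sketch of that check as the trace shows it; exp_module is software here because every run in this job is invoked with scan_dsa=false:

  # module name and execution count of the crc32c accel operation
  read -r acc_module acc_executed < <(scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c ran in the software module"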
00:34:45.132 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.132 [2024-07-13 00:59:56.614003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.132 [2024-07-13 00:59:56.650122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.391 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:45.391 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:34:45.391 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:45.391 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:45.391 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:45.391 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:45.391 00:59:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:45.959 nvme0n1 00:34:45.959 00:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:45.959 00:59:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:45.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:45.959 Zero copy mechanism will not be used. 00:34:45.959 Running I/O for 2 seconds... 
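Two reference points for the table that follows. The repeated zero copy notices are expected rather than an error: this pass issues 131072-byte I/O, which exceeds the 65536-byte zero copy threshold the notice quotes, so bdevperf falls back to copied buffers. And the MiB/s column is simply IOPS scaled by the I/O size, MiB/s = IOPS * io_size / 2^20: here 5458.57 * 131072 / 1048576 = 682.32 MiB/s, just as the 4096-byte pass above gave 25060.21 * 4096 / 1048576 = 97.89 MiB/s, both matching the reported figures.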
00:34:48.491
00:34:48.491 Latency(us)
00:34:48.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:48.491 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:48.491 nvme0n1 : 2.00 5458.57 682.32 0.00 0.00 2928.22 769.34 10485.76
00:34:48.491 ===================================================================================================================
00:34:48.491 Total : 5458.57 682.32 0.00 0.00 2928.22 769.34 10485.76
00:34:48.491 0
00:34:48.491 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:48.491 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:48.491 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:48.491 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:48.491 | select(.opcode=="crc32c")
00:34:48.491 | "\(.module_name) \(.executed)"'
00:34:48.491 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1605784
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1605784 ']'
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1605784
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1605784
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1605784'
00:34:48.492 killing process with pid 1605784
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1605784
00:34:48.492 Received shutdown signal, test time was about 2.000000 seconds
00:34:48.492
00:34:48.492 Latency(us)
00:34:48.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:48.492 ===================================================================================================================
00:34:48.492 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1605784
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1606259
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1606259 /var/tmp/bperf.sock
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1606259 ']'
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:48.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:48.492 00:59:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:34:48.492 [2024-07-13 00:59:59.894242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
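The randread numbers in the first table above are internally consistent: 5458.57 IOPS x 131072 B per I/O is 682.32 MiB/s (5458.57 x 128 KiB / 1024), exactly the MiB/s column, and Little's law for a queue depth of 16 predicts 16 / 2928.22 us = 5464 IOPS, within 0.1% of the measured rate.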
00:34:48.492 [2024-07-13 00:59:59.894295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606259 ] 00:34:48.492 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.492 [2024-07-13 00:59:59.958900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:48.492 [2024-07-13 00:59:59.999548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.492 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:48.492 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:34:48.492 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:48.492 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:48.492 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:48.751 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:48.751 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:49.319 nvme0n1 00:34:49.319 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:49.319 01:00:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:49.319 Running I/O for 2 seconds... 
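Each run ends with the accel_get_stats check traced earlier: the jq filter reduces the RPC reply to "module executed-count" for the crc32c opcode, and the test asserts the count is non-zero and the module matches exp_module (software here; dsa in the DSA-accelerated variant). The field names below are inferred from the filter itself and the values are illustrative; the real reply carries more counters:

    # reply shape (illustrative): { "operations": [ { "opcode": "crc32c",
    #                                "module_name": "software", "executed": 172, ... } ] }
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
    # -> software 172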
00:34:51.231
00:34:51.231 Latency(us)
00:34:51.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:51.231 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:51.231 nvme0n1 : 2.00 26801.28 104.69 0.00 0.00 4766.98 4445.05 14474.91
00:34:51.231 ===================================================================================================================
00:34:51.231 Total : 26801.28 104.69 0.00 0.00 4766.98 4445.05 14474.91
00:34:51.231 0
00:34:51.231 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:51.231 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:51.231 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:51.231 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:51.231 | select(.opcode=="crc32c")
00:34:51.231 | "\(.module_name) \(.executed)"'
00:34:51.231 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:51.488 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:34:51.488 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:34:51.488 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:34:51.488 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:34:51.488 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1606259
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1606259 ']'
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1606259
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606259
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1606259'
00:34:51.489 killing process with pid 1606259
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1606259
00:34:51.489 Received shutdown signal, test time was about 2.000000 seconds
00:34:51.489
00:34:51.489 Latency(us)
00:34:51.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:51.489 ===================================================================================================================
00:34:51.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:51.489 01:00:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1606259
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1606876
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1606876 /var/tmp/bperf.sock
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1606876 ']'
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:51.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:51.746 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:34:51.746 [2024-07-13 01:00:03.184518] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:51.746 [2024-07-13 01:00:03.184567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606876 ]
00:34:51.746 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:51.746 Zero copy mechanism will not be used.
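Every bperf teardown traced in this section goes through the same killprocess helper; the sketch below is reconstructed from the xtrace line numbers only, and the real helper in autotest_common.sh carries further branches (sudo-owned processes, for one):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                           # @948: require a pid
        kill -0 "$pid"                                      # @952: error out if already gone
        if [ "$(uname)" = Linux ]; then                     # @953
            process_name=$(ps --no-headers -o comm= "$pid") # @954: "reactor_1" here
        fi
        # @958: the real helper special-cases process_name = sudo (branch never taken in this run)
        echo "killing process with pid $pid"                # @966
        kill "$pid"                                         # @967
        wait "$pid"                                         # @972: reap and propagate the exit status
    }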
00:34:51.746 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.746 [2024-07-13 01:00:03.253731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.746 [2024-07-13 01:00:03.294646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.004 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:52.004 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:34:52.004 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:52.004 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:52.004 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:52.004 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.004 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:52.262 nvme0n1 00:34:52.262 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:52.262 01:00:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:52.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:52.520 Zero copy mechanism will not be used. 00:34:52.520 Running I/O for 2 seconds... 
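Each run_bperf invocation also carries the scan_dsa flag, false throughout this job. Judging by the trace, host/digest.sh@86 executes the flag as a command guard, so in the DSA-accelerated variant the same spot would load the DSA accel module before framework_start_init and the crc32c digest work would land on hardware rather than the software module; that hook is why bdevperf has to start with --wait-for-rpc at all. A sketch of the guarded step (the RPC name is an assumption from current SPDK trees, since the branch never runs here):

    $scan_dsa && $rpc dsa_scan_accel_module   # claim Intel DSA devices for accel (assumed RPC)
    $rpc framework_start_init                 # then finish subsystem init as traced above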
00:34:54.454
00:34:54.454 Latency(us)
00:34:54.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:54.454 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:34:54.454 nvme0n1 : 2.00 6656.13 832.02 0.00 0.00 2400.25 1723.88 7009.50
00:34:54.454 ===================================================================================================================
00:34:54.454 Total : 6656.13 832.02 0.00 0.00 2400.25 1723.88 7009.50
00:34:54.454 0
00:34:54.454 01:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:54.454 01:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:54.454 01:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:54.454 01:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:54.454 | select(.opcode=="crc32c")
00:34:54.454 | "\(.module_name) \(.executed)"'
00:34:54.454 01:00:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1606876
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1606876 ']'
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1606876
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606876
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1606876'
00:34:54.712 killing process with pid 1606876
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1606876
00:34:54.712 Received shutdown signal, test time was about 2.000000 seconds
00:34:54.712
00:34:54.712 Latency(us)
00:34:54.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:54.712 ===================================================================================================================
00:34:54.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:54.712 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1606876
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1605094
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1605094 ']'
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1605094
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1605094
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1605094'
00:34:54.971 killing process with pid 1605094
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1605094
00:34:54.971 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1605094
00:34:55.229
00:34:55.229 real 0m13.592s
00:34:55.229 user 0m25.681s
00:34:55.229 sys 0m4.564s
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:34:55.229 ************************************
00:34:55.229 END TEST nvmf_digest_clean
00:34:55.229 ************************************
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:34:55.229 ************************************
00:34:55.229 START TEST nvmf_digest_error
00:34:55.229 ************************************
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1607570
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1607570
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1607570 ']'
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:55.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:55.229 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:55.229 [2024-07-13 01:00:06.681921] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:55.229 [2024-07-13 01:00:06.681963] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:55.229 EAL: No free 2048 kB hugepages reported on node 1
00:34:55.229 [2024-07-13 01:00:06.752709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:55.487 [2024-07-13 01:00:06.792608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:55.487 [2024-07-13 01:00:06.792644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:55.487 [2024-07-13 01:00:06.792651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:55.487 [2024-07-13 01:00:06.792657] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:55.487 [2024-07-13 01:00:06.792662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
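Here nvmf_tgt was started with -e 0xFFFF, enabling every tracepoint group, and -i 0, which fixes the shared-memory instance id; that is why the NOTICEs above point at /dev/shm/nvmf_trace.0. A snapshot can be taken live exactly as the log suggests, or the file can be copied and decoded later (the -f option of the spdk_trace app reads a trace file instead of shared memory):

    spdk_trace -s nvmf -i 0            # live snapshot while the target runs
    cp /dev/shm/nvmf_trace.0 /tmp/     # keep the raw trace for offline analysis/debug
    spdk_trace -f /tmp/nvmf_trace.0    # decode the copied file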
00:34:55.487 [2024-07-13 01:00:06.792680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.487 [2024-07-13 01:00:06.861111] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.487 null0 00:34:55.487 [2024-07-13 01:00:06.944613] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.487 [2024-07-13 01:00:06.968774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1607593 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1607593 /var/tmp/bperf.sock 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1607593 ']' 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:55.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:55.487 01:00:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:55.487 [2024-07-13 01:00:07.018959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:55.487 [2024-07-13 01:00:07.019000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607593 ] 00:34:55.487 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.749 [2024-07-13 01:00:07.087177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.749 [2024-07-13 01:00:07.127155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.749 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:55.749 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:55.749 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:55.749 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:56.007 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:56.007 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.007 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:56.007 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.007 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.007 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.265 nvme0n1 00:34:56.265 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:34:56.265 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.265 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:56.265 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.265 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:56.266 01:00:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:56.524 Running I/O for 2 seconds... 00:34:56.524 [2024-07-13 01:00:07.920441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.920472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.920482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:07.932577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.932603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.932613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:07.943323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.943347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.943356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:07.951164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.951185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.951194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:07.966781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.966802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.966810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:07.976281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.976301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.976310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:07.988650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.988669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15299 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.988678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:07.997091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:07.997111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:07.997119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.008604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.008627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.008635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.018233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.018254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.018262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.027849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.027870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.027878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.036592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.036613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.036621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.046333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.046354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.046362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.054349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.054369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:14190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.054377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.063774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.063794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.063802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.524 [2024-07-13 01:00:08.073334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.524 [2024-07-13 01:00:08.073355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.524 [2024-07-13 01:00:08.073367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.084954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.084976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.084984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.094304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.094323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.094332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.103458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.103479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.103487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.112083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.112103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.112111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.121250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.121270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.121278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.130977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.130997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.131006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.139929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.139950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.139958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.149143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.149163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.149171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.158947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.158970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.158979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.168036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.168056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.168065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.177880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.177900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.177908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.186406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 
[2024-07-13 01:00:08.186426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.186435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.783 [2024-07-13 01:00:08.197490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.783 [2024-07-13 01:00:08.197510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.783 [2024-07-13 01:00:08.197519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.207430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.207450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.207458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.216369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.216388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.216396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.227279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.227300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.227309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.237551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.237571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.237579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.245667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.245687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.245695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.255686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.255706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.255715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.264185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.264206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.264214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.274457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.274477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.274485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.284297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.284318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.284326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.293113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.293134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.293142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.303184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.303204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.303212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.311067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.311087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.311095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.323108] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.323128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.323139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.784 [2024-07-13 01:00:08.331708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:56.784 [2024-07-13 01:00:08.331728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:56.784 [2024-07-13 01:00:08.331736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.043 [2024-07-13 01:00:08.342784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:57.043 [2024-07-13 01:00:08.342806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.043 [2024-07-13 01:00:08.342815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.043 [2024-07-13 01:00:08.353047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:57.043 [2024-07-13 01:00:08.353067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.043 [2024-07-13 01:00:08.353076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.043 [2024-07-13 01:00:08.362030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:57.043 [2024-07-13 01:00:08.362051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.043 [2024-07-13 01:00:08.362059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.043 [2024-07-13 01:00:08.373155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:57.043 [2024-07-13 01:00:08.373176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.043 [2024-07-13 01:00:08.373186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.043 [2024-07-13 01:00:08.383093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:57.043 [2024-07-13 01:00:08.383114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.043 [2024-07-13 01:00:08.383123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 
[... the same three-line sequence repeats from 01:00:08.392 through 01:00:09.675 on tqpair=(0xb7b9a0): a data digest error, the offending READ on sqid:1 (cid and lba vary, always len:1), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.351 [2024-07-13 01:00:09.683604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.351 [2024-07-13 01:00:09.683616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.351 [2024-07-13 01:00:09.694196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.351 [2024-07-13 01:00:09.694217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.351 [2024-07-13 01:00:09.694230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.351 [2024-07-13 01:00:09.703382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.351 [2024-07-13 01:00:09.703401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.351 [2024-07-13 01:00:09.703410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.351 [2024-07-13 01:00:09.712867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.351 [2024-07-13 01:00:09.712888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.351 [2024-07-13 01:00:09.712896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.351 [2024-07-13 01:00:09.722222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.351 [2024-07-13 01:00:09.722248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.351 [2024-07-13 01:00:09.722257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.351 [2024-07-13 01:00:09.731676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.731697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.731705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.739685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.739704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.739713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:58.352 [2024-07-13 01:00:09.751435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.751455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.751463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.763850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.763871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.763879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.776263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.776285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.776294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.786684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.786704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.786713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.797403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.797423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.797431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.806182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.806202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.806211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.818533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.818555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.818563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.826728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.826750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.826758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.838150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.838171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.838179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.850253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.850274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.850282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.858560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.858580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.858592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.870470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.870491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.870499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.879515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.879536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.879544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.888613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.888634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.888642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.898648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.898668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.898676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.352 [2024-07-13 01:00:09.906921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7b9a0) 00:34:58.352 [2024-07-13 01:00:09.906942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:58.352 [2024-07-13 01:00:09.906950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:58.610 00:34:58.610 Latency(us) 00:34:58.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.610 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:58.610 nvme0n1 : 2.00 25826.77 100.89 0.00 0.00 4951.22 2592.95 17894.18 00:34:58.610 =================================================================================================================== 00:34:58.610 Total : 25826.77 100.89 0.00 0.00 4951.22 2592.95 17894.18 00:34:58.610 0 00:34:58.610 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:58.610 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:58.610 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:58.610 | .driver_specific 00:34:58.610 | .nvme_error 00:34:58.610 | .status_code 00:34:58.610 | .command_transient_transport_error' 00:34:58.610 01:00:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 )) 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1607593 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1607593 ']' 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1607593 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1607593 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:58.610 01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1607593' 00:34:58.610 killing process with pid 1607593 00:34:58.610 01:00:10 
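For reference, the get_transient_errcount helper traced above boils down to the following shell sketch (assumptions: $rootdir is the SPDK checkout used by this run, bdevperf is still listening on /var/tmp/bperf.sock, and --nvme-error-stat was passed to bdev_nvme_set_options so bdev_get_iostat carries the nvme_error counters):

    # Query per-bdev NVMe error statistics over the bperf RPC socket and pull
    # out the transient-transport-error counter that the injected digest
    # errors increment.
    get_transient_errcount() {
        local bdev=$1
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The assertion behind the '(( 202 > 0 ))' trace: the run only passes if
    # at least one transient transport error was counted.
    (( $(get_transient_errcount nvme0n1) > 0 ))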
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1607593
00:34:58.610 Received shutdown signal, test time was about 2.000000 seconds
00:34:58.610
00:34:58.610 Latency(us)
00:34:58.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:58.610 ===================================================================================================================
00:34:58.610 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1607593
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1608460
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1608460 /var/tmp/bperf.sock
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1608460 ']'
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-07-13 01:00:10.368107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
[2024-07-13 01:00:10.368158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608460 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
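Condensed, the launch traced above is just bdevperf in RPC-driven mode; a minimal sketch under the same assumptions ($rootdir as above, waitforlisten sourced from SPDK's common/autotest_common.sh, and backgrounding/$! assumed for how digest.sh captures the pid):

    # -z keeps bdevperf idle until a perform_tests RPC arrives; -m 2 pins it
    # to core 1 (Core Mask 0x2); workload: randread, 128 KiB I/O, qd 16, 2 s.
    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Block until the app has opened its RPC socket before issuing RPCs.
    waitforlisten "$bperfpid" /var/tmp/bperf.sock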
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-13 01:00:10.435891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-13 01:00:10.476192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
01:00:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
01:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
01:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
01:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
01:00:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
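Stripped of the xtrace noise, the setup traced above is a short RPC sequence; a sketch under the same assumptions ($rootdir as above; rpc_cmd is the autotest helper that talks to the nvmf target's default RPC socket, while bperf_rpc targets /var/tmp/bperf.sock):

    bperf_rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Keep per-bdev NVMe error counters and retry failed I/O indefinitely, so
    # every injected digest error is recorded instead of failing the job.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with target-side crc32c error injection disabled.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the controller over TCP with data digest enabled (--ddgst).
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt crc32c results on the target so the host sees digest errors.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Trigger the timed run in the waiting bdevperf instance.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests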
00:34:59.645 [2024-07-13 01:00:11.171711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.645 [2024-07-13 01:00:11.171744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.645 [2024-07-13 01:00:11.171755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.645 [2024-07-13 01:00:11.177964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.645 [2024-07-13 01:00:11.177995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.645 [2024-07-13 01:00:11.178004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.645 [2024-07-13 01:00:11.184036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.645 [2024-07-13 01:00:11.184058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.645 [2024-07-13 01:00:11.184067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.645 [2024-07-13 01:00:11.190542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.645 [2024-07-13 01:00:11.190565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.645 [2024-07-13 01:00:11.190574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.645 [2024-07-13 01:00:11.196922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.645 [2024-07-13 01:00:11.196948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.645 [2024-07-13 01:00:11.196957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.905 [2024-07-13 01:00:11.204185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.905 [2024-07-13 01:00:11.204208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.905 [2024-07-13 01:00:11.204216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.905 [2024-07-13 01:00:11.209972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.905 [2024-07-13 01:00:11.209993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.905 [2024-07-13 01:00:11.210001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.905 [2024-07-13 01:00:11.216687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.905 [2024-07-13 01:00:11.216709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.905 [2024-07-13 01:00:11.216717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.222659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.222680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.222688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.228760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.228782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.228790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.235474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.235497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.235505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.241904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.241927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.241935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.248891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.248913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.248922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.257380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.257403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.257412] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.264637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.264659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.264667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.272067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.272090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.272098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.278523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.278545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.278553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.283830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.283852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.283860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.289746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.289767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.289775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.295699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.295721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.295729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.301612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.301633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.301641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.307073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.307095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.307106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.312397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.312419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.312427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.318102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.318123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.318132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.324018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.324040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.324047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.329682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.329704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.329712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.335606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.335627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.335635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.341140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.341162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:59.906 [2024-07-13 01:00:11.341170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.346495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.346517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.346525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.352298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.352320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.352328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.358388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.358411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.358419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.364436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.364459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.364467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.370242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.370265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.370273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.374504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.374525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.374534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.378804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.378826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.378834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.383525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.383547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.906 [2024-07-13 01:00:11.383555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.906 [2024-07-13 01:00:11.389429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.906 [2024-07-13 01:00:11.389451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.389459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.395175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.395196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.395204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.400759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.400781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.400792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.406282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.406304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.406312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.411612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.411634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.411642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.416893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.416915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.416923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.422220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.422250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.422258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.428688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.428712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.428720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.434169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.434191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.434199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.439700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.439722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.439729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.445043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.445065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.445073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.450367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.450392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.450400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.455337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.455358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.455366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:59.907 [2024-07-13 01:00:11.460580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:34:59.907 [2024-07-13 01:00:11.460602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:59.907 [2024-07-13 01:00:11.460610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.193 [2024-07-13 01:00:11.465877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.193 [2024-07-13 01:00:11.465902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.193 [2024-07-13 01:00:11.465910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.193 [2024-07-13 01:00:11.471199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.193 [2024-07-13 01:00:11.471221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.193 [2024-07-13 01:00:11.471237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.193 [2024-07-13 01:00:11.476090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.476112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.476120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.481414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.481436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.481443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.486576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.486596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.486604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.491778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 
[2024-07-13 01:00:11.491799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.491806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.497154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.497175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.497184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.502599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.502620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.502628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.508114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.508135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.508143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.513490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.513511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.513519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.518800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.518822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.518829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.524102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.194 [2024-07-13 01:00:11.524123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.194 [2024-07-13 01:00:11.524131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.194 [2024-07-13 01:00:11.529393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
[... several hundred similar records elided: nvme_tcp.c:1459 data digest errors on tqpair=(0xbcd140), each followed by a READ (sqid:1, len:32, varying cid/lba) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, from 01:00:11.529414 through 01:00:12.329680 ...]
cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.329688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.335587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.335612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.335620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.340923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.340944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.340952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.346292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.346313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.346322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.351759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.351780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.351788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.357282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.357303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.357311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.362696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.362717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.362724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.368100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.368121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.368129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.373513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.980 [2024-07-13 01:00:12.373534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.980 [2024-07-13 01:00:12.373542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.980 [2024-07-13 01:00:12.378871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.378893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.378901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.384402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.384424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.384433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.389985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.390007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.390016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.395602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.395624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.395631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.401113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.401135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.401143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.406447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 
[2024-07-13 01:00:12.406468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.406476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.411908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.411929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.411937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.417411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.417432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.417440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.422940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.422962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.422969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.428501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.428521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.428532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.433955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.433976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.433985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.440409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.440430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.440438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.445750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.445772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.445781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.452820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.452843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.452852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.459637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.459659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.459667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.466525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.466547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.466555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.473978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.474000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.474008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.481022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.481046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.481054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.487395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.487415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.487423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.491719] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.491741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.491750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.498952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.498976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.498984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.505264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.505286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.505294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.511060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.511083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.511091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.517081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.517102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.517109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.523569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.523590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.523599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:00.981 [2024-07-13 01:00:12.531185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:00.981 [2024-07-13 01:00:12.531207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.981 [2024-07-13 01:00:12.531215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:01.242 [2024-07-13 01:00:12.538393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.538416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.538432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.544968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.544990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.544999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.551090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.551111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.551119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.556966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.556987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.556995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.563161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.563182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.563190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.568685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.568706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.568714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.574702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.574724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.574731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.580290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.580311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.580318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.585873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.585894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.585902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.591633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.242 [2024-07-13 01:00:12.591659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.242 [2024-07-13 01:00:12.591666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.242 [2024-07-13 01:00:12.597574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.597597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.597605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.603313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.603334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.603342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.608819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.608840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.608848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.614369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.614389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.614397] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.619909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.619930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.619937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.625431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.625452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.625460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.630732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.630753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.630761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.636173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.636193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.636202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.641331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.641353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.641361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.646507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.646528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.646536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.651784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.651805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.651813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.657053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.657074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.657082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.662435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.662456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.662464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.667949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.667971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.667979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.673669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.673690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.673698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.679053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.679074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.679082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.684768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.684791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.684802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.690662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.690684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:01.243 [2024-07-13 01:00:12.690693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.696154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.696176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.696185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.701508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.701529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.701537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.706840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.706861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.706869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.712219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.712246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.712255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.717674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.717696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.717704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.723183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.723205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.723213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.728759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.728779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.728787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.734273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.734298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.734307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.739814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.739836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.739844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.745596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.745618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.745625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.751405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.243 [2024-07-13 01:00:12.751428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.243 [2024-07-13 01:00:12.751436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.243 [2024-07-13 01:00:12.756958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.756978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.756986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.759963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.759984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.759992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.765423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.765443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.765451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.770835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.770856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.770864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.776117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.776136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.776144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.780996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.781017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.781026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.786259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.786280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.786288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.791158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.791179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.791187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.244 [2024-07-13 01:00:12.796333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.244 [2024-07-13 01:00:12.796353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.244 [2024-07-13 01:00:12.796361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.801423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.801445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.801454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.806548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.806570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.806578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.811669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.811692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.811700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.816915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.816937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.816945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.822059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.822080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.822092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.827281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.827301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.827309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.832466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.832487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.832496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.837374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 
00:35:01.503 [2024-07-13 01:00:12.837395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.837403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.842656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.842678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.842686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.848333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.848355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.848363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.853825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.853846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.853855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.860005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.860026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.860034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.866137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.866158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.866166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.872051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.872073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.872080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.877923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.877944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.877952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.884273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.884294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.884302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.890341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.890363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.890371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.896574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.896596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.896605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.902467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.902489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.902497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.503 [2024-07-13 01:00:12.908545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.503 [2024-07-13 01:00:12.908566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.503 [2024-07-13 01:00:12.908573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.914812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.914835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.914843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.920839] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.920860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.920872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.926938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.926959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.926967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.932617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.932638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.932646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.938546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.938567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.938575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.944668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.944690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.944698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.950809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.950830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.950839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.957499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.957521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.957529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:01.504 [2024-07-13 01:00:12.963310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.963332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.963340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.969209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.969236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.969244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.975276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.975302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.975310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.981273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.981295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.981303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.987635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.987656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.987664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.993429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.993450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.993458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:12.999346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:12.999367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:12.999375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.005548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.005569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.005577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.011428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.011449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.011457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.017450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.017471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.017479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.023443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.023464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.023472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.029635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.029657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.029665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.035997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.036018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.036026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.040088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.040109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.040116] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.044789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.044810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.044817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.050676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.050697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.050706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.504 [2024-07-13 01:00:13.056579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.504 [2024-07-13 01:00:13.056601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.504 [2024-07-13 01:00:13.056609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.062210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.062239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.062247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.068044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.068066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.068075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.074101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.074122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.074134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.079655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.079676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.079684] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.085574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.085596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.085605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.091276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.091299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.091307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.097078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.097102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.097111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.103503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.103526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.103535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.109472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.109493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.109502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.116068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.116091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.116099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.121818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.121839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:01.763 [2024-07-13 01:00:13.121848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.128689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.128715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.128723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.135857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.135879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.135887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.142832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.142854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.142863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.149094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.149116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.149124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.154909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.154930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.154938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.160809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.160830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:01.763 [2024-07-13 01:00:13.160838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:01.763 [2024-07-13 01:00:13.166037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd140) 00:35:01.763 [2024-07-13 01:00:13.166059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:01.763 [2024-07-13 01:00:13.166066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:01.763
00:35:01.763 Latency(us)
00:35:01.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:01.764 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:01.764 nvme0n1 : 2.00 5264.48 658.06 0.00 0.00 3036.37 480.83 9289.02
00:35:01.764 ===================================================================================================================
00:35:01.764 Total : 5264.48 658.06 0.00 0.00 3036.37 480.83 9289.02
00:35:01.764 0
00:35:01.764 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:01.764 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:01.764 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:01.764 | .driver_specific
00:35:01.764 | .nvme_error
00:35:01.764 | .status_code
00:35:01.764 | .command_transient_transport_error'
00:35:01.764 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 339 > 0 ))
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1608460
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1608460 ']'
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1608460
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1608460
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1608460'
00:35:02.022 killing process with pid 1608460
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1608460
00:35:02.022 Received shutdown signal, test time was about 2.000000 seconds
00:35:02.022
00:35:02.022 Latency(us)
00:35:02.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:02.022 ===================================================================================================================
00:35:02.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:02.022 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1608460
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
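The run above is the digest-error verification for randread: 5264.48 IOPS of 128 KiB reads over 2.00 s works out to the reported 658.06 MiB/s, and the (( 339 > 0 )) check that follows confirms the initiator counted the injected digest failures. That check boils down to one RPC-plus-jq pipeline; a minimal standalone sketch, assuming an SPDK checkout at $SPDK_DIR (a placeholder for the workspace path above) and a bperf instance started with --nvme-error-stat:

# Count transient transport errors recorded for the bdev, as host/digest.sh does.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
# The test only passes when at least one injected error was observed (here: 339).
(( errcount > 0 )) && echo "observed $errcount transient transport errors"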
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1609120
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1609120 /var/tmp/bperf.sock
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1609120 ']'
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:02.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:02.281 [2024-07-13 01:00:13.630287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:35:02.281 [2024-07-13 01:00:13.630336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609120 ]
00:35:02.281 EAL: No free 2048 kB hugepages reported on node 1
00:35:02.281 [2024-07-13 01:00:13.698638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:02.281 [2024-07-13 01:00:13.739533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:02.281 01:00:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:02.539 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:02.539 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:02.539 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:02.539 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:02.539 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
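With the randread pass finished and the first bperf process killed, the script restarts bdevperf for the randwrite variant. The -z flag keeps the new instance idle until perform_tests arrives over the RPC socket, and bdev_nvme_set_options arms per-status-code error counters with unlimited retries so injected failures are tallied instead of aborting the job. A condensed sketch of that relaunch, under the same $SPDK_DIR placeholder assumption:

# Start bdevperf idle; -z makes it wait for a perform_tests RPC before running.
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
  -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# Keep NVMe error counters per status code; -1 means retry failed I/O indefinitely.
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
  --nvme-error-stat --bdev-retry-count -1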
00:35:02.539 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:02.798 nvme0n1
00:35:02.798 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:02.798 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:02.798 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:02.798 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:02.798 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:02.798 01:00:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:02.798 Running I/O for 2 seconds...
00:35:02.798 [2024-07-13 01:00:14.352356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:02.798 [2024-07-13 01:00:14.352535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:02.798 [2024-07-13 01:00:14.352563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.362056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.362232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.362255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.371638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.371794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.371814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.381139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.381324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.381345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.390693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.390855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
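The WRITE completions above show the randwrite leg in action: the controller is attached with the NVMe/TCP data digest enabled, CRC32C corruption is armed in the accel framework, and each subsequent data PDU fails its digest check with a transient transport error (the records that follow repeat this pattern for the rest of the 2-second run). A sketch of that sequence with the same placeholders; note the injection RPC is issued without -s, so it goes to the target application's default RPC socket (an assumption here, matching the rpc_cmd trace above):

# Attach over NVMe/TCP with data digest verification enabled (--ddgst).
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the result of the next 256 crc32c operations so computed digests
# stop matching what is on the wire.
"$SPDK_DIR"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
# Kick off the queued bdevperf workload.
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests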
00:35:03.057 [2024-07-13 01:00:14.390875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.400289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.400455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.400474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.409833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.409995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.410014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.419331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.419493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.419513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.428885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.429048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.429067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:03.057 [2024-07-13 01:00:14.438415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:03.057 [2024-07-13 01:00:14.438575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:03.057 [2024-07-13 01:00:14.438596] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.057 [2024-07-13 01:00:14.466906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.057 [2024-07-13 01:00:14.467068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.057 [2024-07-13 01:00:14.467087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.057 [2024-07-13 01:00:14.476426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.057 [2024-07-13 01:00:14.476584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.057 [2024-07-13 01:00:14.476603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.485845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.485997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.486016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.495306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.495467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.495486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.504889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.505053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.505071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.514342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.514502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.514521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.523832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.523990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.524010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.533288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.533449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.533467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.542755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.542915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.542937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.552270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.552430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.552450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.561752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.561912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.561931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.571192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.571359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.571379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.580720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.580877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.580897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.590170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.590335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.590355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.599661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.599837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.599857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.058 [2024-07-13 01:00:14.609232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.058 [2024-07-13 01:00:14.609393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.058 [2024-07-13 01:00:14.609412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.317 [2024-07-13 01:00:14.618919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.317 [2024-07-13 01:00:14.619081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.317 [2024-07-13 01:00:14.619101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.317 [2024-07-13 01:00:14.628433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.317 [2024-07-13 01:00:14.628595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.628618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.637945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.638104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.638124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.647406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.647568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.647587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.656919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.657071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.657090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.666348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.666510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.666530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.675851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.676009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.676028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.685743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.685905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.685924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.695290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.695455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.695475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.705114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.705283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.705303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.714633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.714801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.714821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.724045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.724204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.724228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.733598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.733758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.733777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.743090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.743253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.743272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.752556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.752717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.752736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.762115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.762285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.762306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.771590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.771748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.771767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.781096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.781264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.781284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.790606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.790765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.790784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.800098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.800271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.800290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.809667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.809826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.809846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.819329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.819488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.819507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.828817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.828976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.828994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.838375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.838534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.838553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.847850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.848009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.848029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.857370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.318 [2024-07-13 01:00:14.857534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.318 [2024-07-13 01:00:14.857553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.318 [2024-07-13 01:00:14.866889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.319 [2024-07-13 01:00:14.867048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.319 [2024-07-13 01:00:14.867067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.876627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.876792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.876814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.886314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.886475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.886495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.895791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.895951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.895971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.905390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.905553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.905573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.914940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.915100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.915120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.924427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.924587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.924605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.933936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.934097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.934116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.943464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.943624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.943643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.952949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.953108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.953127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.962495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.962659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.962679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.972014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.972174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.972194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.981527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.981686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.981705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:14.991012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:14.991175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:14.991195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.000462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.000626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.000646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.009993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.010153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.010172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.019535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.019694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.019713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.028998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.029158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.029178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.038554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.038712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.038731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.048039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.048201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.048222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.057514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.057674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.057694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.578 [2024-07-13 01:00:15.067065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.578 [2024-07-13 01:00:15.067229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.578 [2024-07-13 01:00:15.067248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.579 [2024-07-13 01:00:15.076562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.579 [2024-07-13 01:00:15.076720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.579 [2024-07-13 01:00:15.076740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.579 [2024-07-13 01:00:15.086045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.579 [2024-07-13 01:00:15.086202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.579 [2024-07-13 01:00:15.086221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.579 [2024-07-13 01:00:15.095573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.579 [2024-07-13 01:00:15.095731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.579 [2024-07-13 01:00:15.095750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.579 [2024-07-13 01:00:15.105091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.579 [2024-07-13 01:00:15.105249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.579 [2024-07-13 01:00:15.105268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.579 [2024-07-13 01:00:15.114645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.579 [2024-07-13 01:00:15.114804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.579 [2024-07-13 01:00:15.114823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.579 [2024-07-13 01:00:15.124133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.579 [2024-07-13 01:00:15.124301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.579 [2024-07-13 01:00:15.124320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.579 [2024-07-13 01:00:15.133795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.579 [2024-07-13 01:00:15.133958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.579 [2024-07-13 01:00:15.133977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.143465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.143627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.143646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.152949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.153106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.153126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.162442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.162603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.162622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.171978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.172135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.172154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.181424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.181583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.181603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.190967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.191129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.191149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.200506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.200678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.200697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.210019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.210177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.838 [2024-07-13 01:00:15.210200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.838 [2024-07-13 01:00:15.219522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.838 [2024-07-13 01:00:15.219682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.219701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.229023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.229182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.229201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.238530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.238691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.238710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.248062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.248222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.248246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.257541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.257701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.257720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.267079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.267242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.267262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.276578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.276738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.276756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.286042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.286202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.286221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.295564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.295727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.295747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.305102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.305268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.305287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.314557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.314718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.314737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.324109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.324278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.324298] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.333598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.333759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.333778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.343108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.343275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.343293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.352594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.352754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.352772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.362058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.362218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.362242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.371599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.371757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.371775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.381078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.381239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.381257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:03.839 [2024-07-13 01:00:15.390686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:03.839 [2024-07-13 01:00:15.390848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:03.839 [2024-07-13 01:00:15.390867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.400292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.400452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.400472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.409822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.409984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.410003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.419330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.419489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.419508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.428824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.428983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.429003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.438308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.438468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.438488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.447816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.447974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.447993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.457336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.457501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.457520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.466817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.466975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.466994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.476353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.476511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.476530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.485808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.485967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.485986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.495319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.495478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.495497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.504862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.505021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.505040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.514312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.514472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.514491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.523829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.523987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.524006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.533336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.533497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.533516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.542815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.542976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.542999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.552322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.552482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.552501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.561802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.561961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.561980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.571286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.571447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.571467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.580801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.580961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.580980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.590328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.590489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.099 [2024-07-13 01:00:15.590508] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.099 [2024-07-13 01:00:15.599851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.099 [2024-07-13 01:00:15.600011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.100 [2024-07-13 01:00:15.600031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.100 [2024-07-13 01:00:15.609354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.100 [2024-07-13 01:00:15.609513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.100 [2024-07-13 01:00:15.609532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.100 [2024-07-13 01:00:15.618725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.100 [2024-07-13 01:00:15.618885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.100 [2024-07-13 01:00:15.618904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.100 [2024-07-13 01:00:15.628214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.100 [2024-07-13 01:00:15.628398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.100 [2024-07-13 01:00:15.628416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.100 [2024-07-13 01:00:15.637710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.100 [2024-07-13 01:00:15.637873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.100 [2024-07-13 01:00:15.637892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.100 [2024-07-13 01:00:15.647301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.100 [2024-07-13 01:00:15.647460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.100 [2024-07-13 01:00:15.647479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.100 [2024-07-13 01:00:15.656878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.100 [2024-07-13 01:00:15.657037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.100 [2024-07-13 01:00:15.657056] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.358 [2024-07-13 01:00:15.666527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.358 [2024-07-13 01:00:15.666687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.358 [2024-07-13 01:00:15.666707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.676043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.676204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.676227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.685713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.685876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.685895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.695309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.695468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.695487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.704885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.705045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.705064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.714356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.714519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.714538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.723852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.724012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.724031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.733354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.733516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.733534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.742843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.743002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.743021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.752340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.752502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.752521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.761830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.761988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.762007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.771291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.771450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.771469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.780831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.780992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.781012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.790304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.790463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.790482] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.799808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.799968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.799987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.809382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.809540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.809559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.818867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.819026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.819045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.828317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.828477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.828496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.837778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.837937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.837957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.847258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.847421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.847440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.856794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.856959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.856977] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.866276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.866434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.866452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.875760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.875919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.875941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.885274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.885433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.885452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.894730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.894888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.894907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.904438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.904597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.904616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.359 [2024-07-13 01:00:15.913962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.359 [2024-07-13 01:00:15.914125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.359 [2024-07-13 01:00:15.914144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.923718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.923878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.923896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.933264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.933424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.933443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.942704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.942862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.942882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.952205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.952371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.952391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.961715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.961880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.961899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.971199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.971367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.971386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.980690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.980850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.980868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:04.618 [2024-07-13 01:00:15.990187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640 00:35:04.618 [2024-07-13 01:00:15.990355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:04.618 [2024-07-13 01:00:15.990376] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:04.618 [2024-07-13 01:00:15.999653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:04.619 [2024-07-13 01:00:15.999812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:04.619 [2024-07-13 01:00:15.999831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:04.879 [2024-07-13 01:00:16.323649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:04.879 [2024-07-13 01:00:16.323811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:04.879 [2024-07-13 01:00:16.323830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:04.879 [2024-07-13 01:00:16.333148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:04.879 [2024-07-13 01:00:16.333315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:04.879 [2024-07-13 01:00:16.333334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:04.879 [2024-07-13 01:00:16.342678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9b7ce0) with pdu=0x2000190fd640
00:35:04.879 [2024-07-13 01:00:16.342838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:04.879 [2024-07-13 01:00:16.342857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:04.879
00:35:04.879                                                                                 Latency(us)
00:35:04.879 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:04.879 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:04.879 	 nvme0n1             :       2.00   26821.36     104.77       0.00     0.00    4763.82    1980.33    9972.87
00:35:04.879 ===================================================================================================================
00:35:04.879 Total                                  :              26821.36     104.77       0.00     0.00    4763.82    1980.33    9972.87
00:35:04.879 0
00:35:04.879 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:04.879 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:04.879 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:04.879 | .driver_specific
00:35:04.879 | .nvme_error
00:35:04.879 | .status_code
00:35:04.879 | .command_transient_transport_error'
00:35:04.879 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 ))
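The `(( 210 > 0 ))` check above is the pass condition of this test case: because the controller was attached with `--bdev-retry-count -1`, every injected digest failure is retried instead of failing the job, so it only shows up in the per-bdev NVMe error counters. A minimal sketch of reading that counter by hand, reusing the exact rpc.py and jq invocations traced above (the `errcount` variable name is illustrative only):

    # Read the transient-transport-error counter for nvme0n1 over the
    # private bperf RPC socket; same calls that host/digest.sh traces above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')
    # The test passes as long as at least one injected error was counted.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"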
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1609120
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1609120 ']'
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1609120
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1609120
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1609120'
killing process with pid 1609120
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1609120
00:35:05.138 Received shutdown signal, test time was about 2.000000 seconds
00:35:05.138
00:35:05.138                                                                                 Latency(us)
00:35:05.138 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:05.138 ===================================================================================================================
00:35:05.138 Total                                  :                   0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:35:05.138 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1609120
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1609619
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1609619 /var/tmp/bperf.sock
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1609619 ']'
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:05.398 01:00:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
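For readability, the relaunch traced above reduces to backgrounding bdevperf in idle mode (-z) on a private RPC socket and polling until that socket answers. The loop below is only a rough stand-in for autotest_common.sh's waitforlisten helper, not its actual implementation; `rpc_get_methods` is used here simply as a cheap liveness probe:

    # Relaunch bdevperf pinned to core 1 (-m 2) for the 128 KiB randwrite
    # pass; -z keeps it idle until perform_tests is sent over the socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the RPC socket is up (simplified waitforlisten stand-in).
    for ((i = 0; i < 100; i++)); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done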
00:35:05.398 [2024-07-13 01:00:16.820050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:35:05.398 [2024-07-13 01:00:16.820100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609619 ]
00:35:05.398 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:05.398 Zero copy mechanism will not be used.
00:35:05.398 EAL: No free 2048 kB hugepages reported on node 1
00:35:05.398 [2024-07-13 01:00:16.888791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:05.398 [2024-07-13 01:00:16.929208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:05.657 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:05.658 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:05.658 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:06.227 nvme0n1
00:35:06.227 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:35:06.227 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:06.227 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:06.227 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:06.227 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:06.227 01:00:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:06.227 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:06.227 Zero copy mechanism will not be used.
00:35:06.227 Running I/O for 2 seconds...
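The RPC sequence just traced is what provokes the wall of digest errors that follows: per-status-code error counting with unlimited retries, a controller attached with --ddgst so every NVMe/TCP data PDU carries a CRC32C data digest, and the accel error injector set to corrupt crc32c operations (reading `-i 32` as an injection count/interval is an assumption here; the flag is copied verbatim from the trace). Condensed to the bare calls, with `$SPDK` as above:

    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    # Keep per-status-code NVMe error counters and retry failed I/O forever.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach over TCP with data digest enabled (DDGST on data PDUs).
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Have the accel error module corrupt crc32c results (-i 32 as traced).
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the queued randwrite workload in the idle bdevperf instance.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests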
00:35:06.227 [2024-07-13 01:00:17.644553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90
00:35:06.227 [2024-07-13 01:00:17.644938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.227 [2024-07-13 01:00:17.644966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:06.751 [2024-07-13 01:00:18.104733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90
00:35:06.751 [2024-07-13 01:00:18.105053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:06.751 [2024-07-13 01:00:18.105073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[2024-07-13 01:00:18.110248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.110588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.110608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.115677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.116005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.116024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.121433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.121768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.121788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.127197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.127535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.127554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.132826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.133164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.133184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.138407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.138742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.138761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.144668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.145001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.145020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.150022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.150355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.150374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.155736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.156068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.156088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.161178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.161498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.161518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.167241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.167567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.167587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.173088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.173434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.173454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.178707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.179044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.179064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.184420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.184794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.184814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.191136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.191471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.191492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.196736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.197066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.197086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.201854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.202175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.202195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.206872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.751 [2024-07-13 01:00:18.207202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.751 [2024-07-13 01:00:18.207230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.751 [2024-07-13 01:00:18.211888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.212211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.212236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.216835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.217172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.217191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.221996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.222330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.222350] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.227029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.227356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.227376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.231869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.232180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.232199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.237063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.237397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.237416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.242743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.243078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.243098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.248954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.249285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.249304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.254424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.254743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.254763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.259390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.259730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.259750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.264521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.264850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.264869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.269577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.269898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.269918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.274602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.274937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.274956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.279439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.279770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.279790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.284381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.284691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.284711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.289583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.289918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.289938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.295413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.295742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:06.752 [2024-07-13 01:00:18.295762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.300944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.301273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.301293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:06.752 [2024-07-13 01:00:18.305990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:06.752 [2024-07-13 01:00:18.306322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.752 [2024-07-13 01:00:18.306343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.311566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.311901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.311921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.316955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.317290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.317309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.322441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.322779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.322799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.328535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.328875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.328895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.334874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.335209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.335233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.340036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.340375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.340395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.345001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.345338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.345365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.349749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.350085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.350105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.354435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.354767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.354786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.359335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.359665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.359685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.363970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.364307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.364327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.368795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.369132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.369151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.373659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.373976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.373996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.378429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.378764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.378784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.383169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.383501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.383521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.387965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.388287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.388307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.392689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.393025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.393045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.397262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.397590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.397610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.401832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.402153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.402173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.406671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.406982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.407001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.411675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.013 [2024-07-13 01:00:18.412005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.013 [2024-07-13 01:00:18.412024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.013 [2024-07-13 01:00:18.416400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.416710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.416729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.421232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.421569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.421589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.426067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.426404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.426424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.430850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.431175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.431194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.435453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 
[2024-07-13 01:00:18.435788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.435808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.440219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.440556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.440576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.445266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.445588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.445608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.450659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.450996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.451016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.455361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.455691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.455710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.460157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.460485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.460505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.464810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.465147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.465167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.470004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with 
pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.470332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.470356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.476323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.476703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.476723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.483205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.483585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.483605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.490650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.491038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.491058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.498369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.498790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.498811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.505719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.506160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.506180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.513170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.513599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.513620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.520285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.520658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.520678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.527485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.527890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.527910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.534483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.534837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.534857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.541125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.541453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.541473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.548120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.548438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.548458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.554424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.554759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.554779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.559900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.560194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.560213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.564936] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.565199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.565219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.014 [2024-07-13 01:00:18.569168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.014 [2024-07-13 01:00:18.569444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.014 [2024-07-13 01:00:18.569463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.275 [2024-07-13 01:00:18.574190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.275 [2024-07-13 01:00:18.574503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.275 [2024-07-13 01:00:18.574525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.275 [2024-07-13 01:00:18.579552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.275 [2024-07-13 01:00:18.579849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.275 [2024-07-13 01:00:18.579869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.275 [2024-07-13 01:00:18.584376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.275 [2024-07-13 01:00:18.584621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.275 [2024-07-13 01:00:18.584640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.275 [2024-07-13 01:00:18.589511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.275 [2024-07-13 01:00:18.589731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.275 [2024-07-13 01:00:18.589750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.275 [2024-07-13 01:00:18.593479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.275 [2024-07-13 01:00:18.593704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.593723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:07.276 [2024-07-13 01:00:18.597441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.276 [2024-07-13 01:00:18.597671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.597691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.276 [2024-07-13 01:00:18.601408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.276 [2024-07-13 01:00:18.601629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.601649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.276 [2024-07-13 01:00:18.605349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.276 [2024-07-13 01:00:18.605574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.605593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.276 [2024-07-13 01:00:18.609275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.276 [2024-07-13 01:00:18.609504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.609524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.276 [2024-07-13 01:00:18.613317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.276 [2024-07-13 01:00:18.613532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.613552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.276 [2024-07-13 01:00:18.617937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.276 [2024-07-13 01:00:18.618164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.618196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.276 [2024-07-13 01:00:18.622741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.276 [2024-07-13 01:00:18.622966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.276 [2024-07-13 01:00:18.622986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:07.276 [2024-07-13 01:00:18.627081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90
00:35:07.276 [2024-07-13 01:00:18.627305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.276 [2024-07-13 01:00:18.627323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:07.276 [2024-07-13 01:00:18.631441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90
00:35:07.276 [2024-07-13 01:00:18.631650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:07.276 [2024-07-13 01:00:18.631670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[~130 further record triples omitted, identical apart from timestamp, lba, and sqhd: each is a tcp.c:2067:data_crc32_calc_done data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90, followed by the failed WRITE (sqid:1 cid:15 nsid:1 len:32, lba varying) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; sqhd cycles 0001→0021→0041→0061 and the run spans 01:00:18.635 through 01:00:19.241 (elapsed 00:35:07.276 through 00:35:07.843)]
00:35:07.843 [2024-07-13 01:00:19.244986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90
[2024-07-13 01:00:19.245222] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.245247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.248974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.249205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.249230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.253023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.253252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.253272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.257033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.257255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.257273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.261107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.261334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.261354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.265875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.266096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.266116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.270588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.270811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.270831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.275212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 
01:00:19.275453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.275476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.279890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.280128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.280148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.285470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.285694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.285713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.290426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.290649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.290668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.294931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.295159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.295180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.299247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.299477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.299496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.303315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.303533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.303552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.307610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with 
pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.307839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.307858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.843 [2024-07-13 01:00:19.311753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.843 [2024-07-13 01:00:19.311981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.843 [2024-07-13 01:00:19.312001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.315966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.316188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.316208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.320489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.320719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.320739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.324801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.325030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.325050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.329056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.329279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.333236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.333463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.333484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.337406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.337633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.337653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.341619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.341844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.341864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.346187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.346419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.346439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.350446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.350667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.350687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.354555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.354775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.354795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.358800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.359027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.359047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.362977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.363203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.363223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.367073] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.367298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.367318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.371605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.371830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.371850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.375879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.376105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.376125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.380187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.380408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.380428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.384460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.384691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.384711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:07.844 [2024-07-13 01:00:19.388788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:07.844 [2024-07-13 01:00:19.389019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.844 [2024-07-13 01:00:19.389043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.393109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.393337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.393357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:08.104 [2024-07-13 01:00:19.397424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.397648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.397669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.401659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.401888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.401908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.405925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.406148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.406166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.410218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.410444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.410464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.414488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.414733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.414752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.418725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.418950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.418970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.423448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.423667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.423686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.427764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.427989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.428009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.432029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.432255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.432273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.436241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.436476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.440470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.440689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.440708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.444797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.445028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.445047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.449029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.449253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.449272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.453267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.453485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.453504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.457456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.457671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.457689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.462209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.462444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.462466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.466310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.466524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.466543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.470474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.470695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.470714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.474568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.474788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.474807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.478738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.478949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.104 [2024-07-13 01:00:19.478968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.104 [2024-07-13 01:00:19.482949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.104 [2024-07-13 01:00:19.483169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.483188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.486993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.487219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.487244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.491003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.491235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.491255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.495123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.495339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.495358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.499756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.499996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.500015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.504901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.505110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.505130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.509161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.509382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.509400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.513393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.513608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 
[2024-07-13 01:00:19.513627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.517689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.517917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.517936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.521885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.522108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.522127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.526393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.526621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.526641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.530650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.530871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.530891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.534950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.535170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.535189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.539075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.539294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.539313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.543288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.543510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.543529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.547539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.547773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.547791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.551615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.551829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.551848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.555878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.556082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.556101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.560068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.560287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.560306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.564178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.564402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.564421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.568056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.568278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.568297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.571919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.572142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.572165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.575801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.576027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.576047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.579672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.579897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.579917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.583551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.583767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.583785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.587382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.587608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.587628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.591312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.591536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.591556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.595216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.595441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.595460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.599078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.599306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.105 [2024-07-13 01:00:19.599324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.105 [2024-07-13 01:00:19.602935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.105 [2024-07-13 01:00:19.603155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.603175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.606809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.607030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.607050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.610668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.610896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.610916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.614556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.614782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.614801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.618414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.618643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.618662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.622284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.622508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.622526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.626118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 
01:00:19.626334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.626353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.629991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.630216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.630240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.633842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.634057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.634084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.637723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.637940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.637959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:08.106 [2024-07-13 01:00:19.641562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaacdd0) with pdu=0x2000190fef90 00:35:08.106 [2024-07-13 01:00:19.641711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.106 [2024-07-13 01:00:19.641729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.106 00:35:08.106 Latency(us) 00:35:08.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.106 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:08.106 nvme0n1 : 2.00 6508.40 813.55 0.00 0.00 2454.51 1716.76 7864.32 00:35:08.106 =================================================================================================================== 00:35:08.106 Total : 6508.40 813.55 0.00 0.00 2454.51 1716.76 7864.32 00:35:08.106 0 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:08.365 | .driver_specific 00:35:08.365 | .nvme_error 00:35:08.365 | .status_code 00:35:08.365 | .command_transient_transport_error' 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 420 > 0 )) 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1609619 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1609619 ']' 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1609619 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1609619 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1609619' 00:35:08.365 killing process with pid 1609619 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1609619 00:35:08.365 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.365 00:35:08.365 Latency(us) 00:35:08.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.365 =================================================================================================================== 00:35:08.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.365 01:00:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1609619 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1607570 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1607570 ']' 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1607570 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1607570 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1607570' 00:35:08.624 killing process with pid 1607570 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1607570 00:35:08.624 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1607570 00:35:08.883 00:35:08.883 real 0m13.652s 00:35:08.883 user 0m25.733s 00:35:08.883 sys 0m4.558s 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.883 
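What the trace above is doing: host/digest.sh asks the bperf process, over its RPC socket, for per-bdev I/O statistics and extracts the count of commands that completed with TRANSIENT TRANSPORT ERROR; the (( 420 > 0 )) line is that assertion with the counter value (420) already substituted in. Below is a minimal stand-alone sketch of the same check, assuming an SPDK application is serving RPCs on /var/tmp/bperf.sock and that rpc_py points at the scripts/rpc.py shown in the trace (the jq field path is copied from the log output, not a guaranteed-stable API shape):

    #!/usr/bin/env bash
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Fetch iostat for one bdev over the bperf RPC socket and pull out the
    # NVMe status-code counter for transient transport errors.
    get_transient_errcount() {
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # the test passes only if the injected digest errors were counted

Every data_crc32_calc_done failure earlier in the log should have bumped this counter, so a positive value here is the proof that the data-digest path actually detected the corrupted payloads.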
************************************ 00:35:08.883 END TEST nvmf_digest_error 00:35:08.883 ************************************ 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:08.883 rmmod nvme_tcp 00:35:08.883 rmmod nvme_fabrics 00:35:08.883 rmmod nvme_keyring 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1607570 ']' 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1607570 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1607570 ']' 00:35:08.883 01:00:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1607570 00:35:08.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1607570) - No such process 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1607570 is not found' 00:35:08.884 Process with pid 1607570 is not found 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:08.884 01:00:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.422 01:00:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:11.422 00:35:11.422 real 0m35.321s 00:35:11.422 user 0m53.077s 00:35:11.422 sys 0m13.537s 00:35:11.422 01:00:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:11.422 01:00:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.422 ************************************ 00:35:11.422 END TEST nvmf_digest 00:35:11.422 ************************************ 00:35:11.422 01:00:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:11.422 01:00:22 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:35:11.422 01:00:22 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:35:11.422 01:00:22 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:35:11.422 01:00:22 
nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:11.422 01:00:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:11.422 01:00:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:11.422 01:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.422 ************************************ 00:35:11.422 START TEST nvmf_bdevperf 00:35:11.422 ************************************ 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:11.422 * Looking for test storage... 00:35:11.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.422 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:11.423 01:00:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:16.692 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:16.693 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:16.693 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:16.693 Found net devices under 0000:86:00.0: cvl_0_0 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
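The discovery loop being traced here deserves a gloss: for each candidate PCI function, nvmf/common.sh globs the net/ directory that sysfs exposes under the device address to learn which kernel interface belongs to that NIC. A standalone sketch of the same idea, using the two E810 ports (0x8086:0x159b) found on this host; treat it as an illustration rather than the helper itself:

# Map each NVMe-oF-capable NIC to the net device the kernel created
# under its PCI address in sysfs.
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue      # skip unbound devices
    pci_net_devs=("${pci_net_devs[@]##*/}")      # keep only the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

On this host the loop resolves both ports to the renamed ice interfaces cvl_0_0 and cvl_0_1, as the surrounding "Found net devices under ..." lines show.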
00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:16.693 Found net devices under 0000:86:00.1: cvl_0_1 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:16.693 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:16.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:16.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:35:16.952 00:35:16.952 --- 10.0.0.2 ping statistics --- 00:35:16.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.952 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:35:16.952 00:35:16.952 --- 10.0.0.1 ping statistics --- 00:35:16.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.952 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1613615 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1613615 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1613615 ']' 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:16.952 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:16.952 [2024-07-13 01:00:28.394982] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
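Those two pings validate the topology that nvmf_tcp_init assembled just above: port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to play the target, its sibling cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and an iptables rule opens TCP port 4420 between them. Condensed from the trace (root required):

# Target NIC goes into a private namespace; initiator NIC stays put.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks, exactly as logged: each side must reach the other.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on is therefore wrapped in ip netns exec cvl_0_0_ns_spdk, which is exactly how nvmf_tgt is launched in the nvmfappstart trace above.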
00:35:16.952 [2024-07-13 01:00:28.395026] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.952 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.952 [2024-07-13 01:00:28.463034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:16.952 [2024-07-13 01:00:28.503737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.952 [2024-07-13 01:00:28.503776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.952 [2024-07-13 01:00:28.503783] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.952 [2024-07-13 01:00:28.503789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.952 [2024-07-13 01:00:28.503795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.952 [2024-07-13 01:00:28.503907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.952 [2024-07-13 01:00:28.504013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.952 [2024-07-13 01:00:28.504014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 [2024-07-13 01:00:28.632540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 Malloc0 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:17.211 [2024-07-13 01:00:28.689875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:17.211 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:17.211 { 00:35:17.211 "params": { 00:35:17.211 "name": "Nvme$subsystem", 00:35:17.211 "trtype": "$TEST_TRANSPORT", 00:35:17.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:17.212 "adrfam": "ipv4", 00:35:17.212 "trsvcid": "$NVMF_PORT", 00:35:17.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:17.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:17.212 "hdgst": ${hdgst:-false}, 00:35:17.212 "ddgst": ${ddgst:-false} 00:35:17.212 }, 00:35:17.212 "method": "bdev_nvme_attach_controller" 00:35:17.212 } 00:35:17.212 EOF 00:35:17.212 )") 00:35:17.212 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:17.212 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:17.212 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:17.212 01:00:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:17.212 "params": { 00:35:17.212 "name": "Nvme1", 00:35:17.212 "trtype": "tcp", 00:35:17.212 "traddr": "10.0.0.2", 00:35:17.212 "adrfam": "ipv4", 00:35:17.212 "trsvcid": "4420", 00:35:17.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:17.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:17.212 "hdgst": false, 00:35:17.212 "ddgst": false 00:35:17.212 }, 00:35:17.212 "method": "bdev_nvme_attach_controller" 00:35:17.212 }' 00:35:17.212 [2024-07-13 01:00:28.738745] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
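gen_nvmf_target_json, whose expansion is traced above, is how bdevperf receives its target description without a config file on disk: the JSON is assembled in the shell and handed over on /dev/fd/62. The params/method object below is copied from the printf in the trace; the outer subsystems/bdev wrapper is my assumption about the helper's framing and is not itself visible in this excerpt:

# Assumed overall shape of the config streamed to bdevperf --json.
cat <<'CONF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
CONF

With hdgst and ddgst both false, no header or data digest is negotiated for this first, clean pass.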
00:35:17.212 [2024-07-13 01:00:28.738787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613646 ] 00:35:17.212 EAL: No free 2048 kB hugepages reported on node 1 00:35:17.470 [2024-07-13 01:00:28.806244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.471 [2024-07-13 01:00:28.846296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.729 Running I/O for 1 seconds... 00:35:18.665 00:35:18.665 Latency(us) 00:35:18.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.665 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:18.665 Verification LBA range: start 0x0 length 0x4000 00:35:18.665 Nvme1n1 : 1.00 11038.97 43.12 0.00 0.00 11548.93 2222.53 14132.98 00:35:18.665 =================================================================================================================== 00:35:18.665 Total : 11038.97 43.12 0.00 0.00 11548.93 2222.53 14132.98 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1613873 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:18.924 { 00:35:18.924 "params": { 00:35:18.924 "name": "Nvme$subsystem", 00:35:18.924 "trtype": "$TEST_TRANSPORT", 00:35:18.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.924 "adrfam": "ipv4", 00:35:18.924 "trsvcid": "$NVMF_PORT", 00:35:18.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.924 "hdgst": ${hdgst:-false}, 00:35:18.924 "ddgst": ${ddgst:-false} 00:35:18.924 }, 00:35:18.924 "method": "bdev_nvme_attach_controller" 00:35:18.924 } 00:35:18.924 EOF 00:35:18.924 )") 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:18.924 01:00:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:18.924 "params": { 00:35:18.924 "name": "Nvme1", 00:35:18.924 "trtype": "tcp", 00:35:18.924 "traddr": "10.0.0.2", 00:35:18.924 "adrfam": "ipv4", 00:35:18.924 "trsvcid": "4420", 00:35:18.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.924 "hdgst": false, 00:35:18.924 "ddgst": false 00:35:18.924 }, 00:35:18.924 "method": "bdev_nvme_attach_controller" 00:35:18.924 }' 00:35:18.924 [2024-07-13 01:00:30.269826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:35:18.924 [2024-07-13 01:00:30.269874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613873 ] 00:35:18.924 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.924 [2024-07-13 01:00:30.337271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.924 [2024-07-13 01:00:30.374702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.183 Running I/O for 15 seconds... 00:35:21.715 01:00:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1613615 00:35:21.715 01:00:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:21.715 [2024-07-13 01:00:33.239142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.715 [2024-07-13 01:00:33.239181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.715 [2024-07-13 01:00:33.239209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.715 [2024-07-13 01:00:33.239231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.715 [2024-07-13 01:00:33.239253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.715 [2024-07-13 01:00:33.239268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.715 [2024-07-13 01:00:33.239289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.715 [2024-07-13 01:00:33.239306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.715 [2024-07-13 01:00:33.239322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [2024-07-13 01:00:33.239331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.715 [2024-07-13 01:00:33.239339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.715 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats here for every remaining outstanding command (READs from lba 108832 through 109456, plus a queued WRITE at lba 109824), each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:35:21.716 [2024-07-13 01:00:33.240724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 
01:00:33.240874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.240988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.240994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.241003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.241009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.241018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.241024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.241032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.241038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.241046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.241053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.716 [2024-07-13 01:00:33.241062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.716 [2024-07-13 01:00:33.241068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241175] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.717 [2024-07-13 01:00:33.241299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8464d0 is same with the state(5) to be set 00:35:21.717 [2024-07-13 01:00:33.241315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:21.717 [2024-07-13 01:00:33.241320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:21.717 [2024-07-13 01:00:33.241326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109768 len:8 PRP1 0x0 PRP2 0x0 00:35:21.717 [2024-07-13 01:00:33.241334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.717 [2024-07-13 01:00:33.241375] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8464d0 was disconnected and freed. reset controller. 00:35:21.717 [2024-07-13 01:00:33.244240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:21.717 [2024-07-13 01:00:33.244291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:21.717 [2024-07-13 01:00:33.244909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:21.717 [2024-07-13 01:00:33.244950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:21.717 [2024-07-13 01:00:33.244972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:21.717 [2024-07-13 01:00:33.245570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:21.717 [2024-07-13 01:00:33.246000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:21.717 [2024-07-13 01:00:33.246009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:21.717 [2024-07-13 01:00:33.246016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:21.717 [2024-07-13 01:00:33.252310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:21.717 [2024-07-13 01:00:33.259461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:21.717 [2024-07-13 01:00:33.259989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:21.717 [2024-07-13 01:00:33.260011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:21.717 [2024-07-13 01:00:33.260022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:21.717 [2024-07-13 01:00:33.260283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:21.717 [2024-07-13 01:00:33.260539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:21.717 [2024-07-13 01:00:33.260552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:21.717 [2024-07-13 01:00:33.260563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:21.717 [2024-07-13 01:00:33.264615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
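[editor's note] A note on reading the completions above: spdk_nvme_print_completion renders the NVMe status as a (status code type/status code) pair, and (00/08) is the generic-status "command aborted due to SQ deletion" code from the NVMe base specification, which is why every READ still queued on the qpair fails the moment the submission queue is torn down. With the logged p:0 m:0 dnr:0 flags, dnr=0 marks the aborts as retryable, consistent with bdev_nvme immediately attempting a controller reset. Below is a minimal stand-alone C sketch of that decode; the constant names are local to the sketch (SPDK ships its own equivalents, e.g. SPDK_NVME_SC_ABORTED_SQ_DELETION in include/spdk/nvme_spec.h), and the bit layout follows completion dword 3 of the NVMe spec.

    #include <stdint.h>
    #include <stdio.h>

    /* Values behind the "(SCT/SC)" pairs printed above; both mirror the
     * NVMe base specification (local names for the sketch, not SPDK's). */
    #define NVME_SCT_GENERIC            0x0  /* generic command status */
    #define NVME_SC_ABORTED_SQ_DELETION 0x8  /* aborted due to SQ deletion */

    /* Completion dword 3, bits 31:16: DNR | M | CRD(2) | SCT(3) | SC(8) | P.
     * Within that 16-bit status word, bit 0 is the phase tag. */
    static void decode_status(uint16_t status, unsigned *sct, unsigned *sc)
    {
        *sc  = (status >> 1) & 0xff; /* bits 8:1  - status code      */
        *sct = (status >> 9) & 0x7;  /* bits 11:9 - status code type */
    }

    int main(void)
    {
        uint16_t raw = 0x08 << 1; /* (00/08) with p:0 m:0 dnr:0 */
        unsigned sct, sc;

        decode_status(raw, &sct, &sc);
        printf("sct=%02x sc=%02x -> %s\n", sct, sc,
               (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION)
                   ? "ABORTED - SQ DELETION" : "other");
        return 0;
    }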
00:35:21.717 .. 00:35:22.238 [2024-07-13 01:00:33.259461 .. 01:00:33.650045] 31 further reset attempts against [nqn.2016-06.io.spdk:cnode1], roughly one every 13 ms, each repeating the identical sequence: nvme_ctrlr.c:1720 *NOTICE*: resetting controller -> posix.c:1038 *ERROR*: connect() failed, errno = 111 -> nvme_tcp.c:2383 *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 -> nvme_tcp.c:2185 *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor -> nvme_ctrlr.c:4164 *ERROR*: Ctrlr is in error state -> nvme_ctrlr.c:1818 *ERROR*: controller reinitialization failed -> nvme_ctrlr.c:1106 *ERROR*: in failed state. -> bdev_nvme.c:2065 *ERROR*: Resetting controller failed. [31 near-identical retry cycles collapsed; only the timestamps vary]
00:35:22.238 [2024-07-13 01:00:33.659532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.659856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.659873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.659879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.660041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.660204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.660213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.660220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.662960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.238 [2024-07-13 01:00:33.672570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.672997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.673041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.673064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.673664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.674128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.674137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.674143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.676889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.238 [2024-07-13 01:00:33.685618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.686025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.686042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.686050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.686213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.686383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.686393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.686400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.689050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.238 [2024-07-13 01:00:33.698621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.698943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.698959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.698967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.699147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.699325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.699335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.699342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.702063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.238 [2024-07-13 01:00:33.711748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.712128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.712146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.712154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.712335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.712514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.712524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.712537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.715347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.238 [2024-07-13 01:00:33.724588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.724952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.724969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.724976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.725148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.725325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.725336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.725342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.728046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.238 [2024-07-13 01:00:33.737526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.737891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.737907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.737914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.738077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.738246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.738256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.738262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.740903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.238 [2024-07-13 01:00:33.750549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.750911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.750928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.750935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.751113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.751296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.751307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.751314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.754158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.238 [2024-07-13 01:00:33.763723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.238 [2024-07-13 01:00:33.764140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.238 [2024-07-13 01:00:33.764157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.238 [2024-07-13 01:00:33.764165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.238 [2024-07-13 01:00:33.764347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.238 [2024-07-13 01:00:33.764526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.238 [2024-07-13 01:00:33.764536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.238 [2024-07-13 01:00:33.764543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.238 [2024-07-13 01:00:33.767369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.238 [2024-07-13 01:00:33.776890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.239 [2024-07-13 01:00:33.777322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.239 [2024-07-13 01:00:33.777340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.239 [2024-07-13 01:00:33.777347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.239 [2024-07-13 01:00:33.777526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.239 [2024-07-13 01:00:33.777705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.239 [2024-07-13 01:00:33.777715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.239 [2024-07-13 01:00:33.777722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.239 [2024-07-13 01:00:33.780550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.239 [2024-07-13 01:00:33.790072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.239 [2024-07-13 01:00:33.790435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.239 [2024-07-13 01:00:33.790453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.239 [2024-07-13 01:00:33.790461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.239 [2024-07-13 01:00:33.790637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.239 [2024-07-13 01:00:33.790815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.239 [2024-07-13 01:00:33.790826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.239 [2024-07-13 01:00:33.790833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.239 [2024-07-13 01:00:33.793664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.498 [2024-07-13 01:00:33.803183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.498 [2024-07-13 01:00:33.803626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.498 [2024-07-13 01:00:33.803644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.498 [2024-07-13 01:00:33.803652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.498 [2024-07-13 01:00:33.803833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.498 [2024-07-13 01:00:33.804011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.498 [2024-07-13 01:00:33.804020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.498 [2024-07-13 01:00:33.804027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.498 [2024-07-13 01:00:33.806858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.498 [2024-07-13 01:00:33.816381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.498 [2024-07-13 01:00:33.816798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.498 [2024-07-13 01:00:33.816815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.498 [2024-07-13 01:00:33.816823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.498 [2024-07-13 01:00:33.817000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.498 [2024-07-13 01:00:33.817179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.498 [2024-07-13 01:00:33.817188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.498 [2024-07-13 01:00:33.817195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.498 [2024-07-13 01:00:33.820026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.499 [2024-07-13 01:00:33.829546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.829979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.829996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.830004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.830180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.830365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.830375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.830382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.833207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.499 [2024-07-13 01:00:33.842726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.843145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.843163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.843170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.843350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.843529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.843538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.843545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.846376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.499 [2024-07-13 01:00:33.855902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.856339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.856357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.856365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.856542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.856722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.856732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.856740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.859573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.499 [2024-07-13 01:00:33.869096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.869455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.869473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.869480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.869657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.869837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.869847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.869853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.872687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.499 [2024-07-13 01:00:33.882218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.882656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.882674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.882681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.882857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.883035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.883045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.883052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.885883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.499 [2024-07-13 01:00:33.895404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.895790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.895808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.895819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.895997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.896177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.896186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.896193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.899020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.499 [2024-07-13 01:00:33.908537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.908884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.908901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.908908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.909085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.909268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.909279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.909285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.912113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.499 [2024-07-13 01:00:33.921639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.922054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.922072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.922080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.922260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.922438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.922448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.922454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.925288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.499 [2024-07-13 01:00:33.934805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.935242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.935260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.935268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.935444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.935626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.935637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.935643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.938475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.499 [2024-07-13 01:00:33.947995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.948429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.948447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.948455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.948631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.948810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.948820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.948827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.951657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.499 [2024-07-13 01:00:33.961176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.961599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.961616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.961623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.961801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.961979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.961989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.499 [2024-07-13 01:00:33.961996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.499 [2024-07-13 01:00:33.964829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.499 [2024-07-13 01:00:33.974357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.499 [2024-07-13 01:00:33.974787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.499 [2024-07-13 01:00:33.974803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.499 [2024-07-13 01:00:33.974811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.499 [2024-07-13 01:00:33.974989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.499 [2024-07-13 01:00:33.975167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.499 [2024-07-13 01:00:33.975178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.500 [2024-07-13 01:00:33.975184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.500 [2024-07-13 01:00:33.978020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.500 [2024-07-13 01:00:33.987543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.500 [2024-07-13 01:00:33.987974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.500 [2024-07-13 01:00:33.987991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.500 [2024-07-13 01:00:33.987999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.500 [2024-07-13 01:00:33.988176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.500 [2024-07-13 01:00:33.988358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.500 [2024-07-13 01:00:33.988368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.500 [2024-07-13 01:00:33.988375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.500 [2024-07-13 01:00:33.991203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.500 [2024-07-13 01:00:34.000723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.500 [2024-07-13 01:00:34.001180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.500 [2024-07-13 01:00:34.001197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.500 [2024-07-13 01:00:34.001206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.500 [2024-07-13 01:00:34.001389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.500 [2024-07-13 01:00:34.001568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.500 [2024-07-13 01:00:34.001577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.500 [2024-07-13 01:00:34.001584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.500 [2024-07-13 01:00:34.004416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.500 [2024-07-13 01:00:34.013833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.500 [2024-07-13 01:00:34.014166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.500 [2024-07-13 01:00:34.014184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.500 [2024-07-13 01:00:34.014191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.500 [2024-07-13 01:00:34.014372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.500 [2024-07-13 01:00:34.014550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.500 [2024-07-13 01:00:34.014560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.500 [2024-07-13 01:00:34.014566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.500 [2024-07-13 01:00:34.017398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.500 [2024-07-13 01:00:34.026919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.500 [2024-07-13 01:00:34.027360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.500 [2024-07-13 01:00:34.027377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.500 [2024-07-13 01:00:34.027387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.500 [2024-07-13 01:00:34.027565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.500 [2024-07-13 01:00:34.027742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.500 [2024-07-13 01:00:34.027752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.500 [2024-07-13 01:00:34.027758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.500 [2024-07-13 01:00:34.030590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.500 [2024-07-13 01:00:34.040106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.500 [2024-07-13 01:00:34.040565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.500 [2024-07-13 01:00:34.040582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.500 [2024-07-13 01:00:34.040589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.500 [2024-07-13 01:00:34.040766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.500 [2024-07-13 01:00:34.040945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.500 [2024-07-13 01:00:34.040955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.500 [2024-07-13 01:00:34.040961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.500 [2024-07-13 01:00:34.043790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.500 [2024-07-13 01:00:34.053309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.500 [2024-07-13 01:00:34.053725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.500 [2024-07-13 01:00:34.053742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.500 [2024-07-13 01:00:34.053750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.500 [2024-07-13 01:00:34.053927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.500 [2024-07-13 01:00:34.054106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.500 [2024-07-13 01:00:34.054115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.500 [2024-07-13 01:00:34.054122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.056953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.759 [2024-07-13 01:00:34.066478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.066891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.066908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.066915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.067093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.067275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.067286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.067297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.070122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.759 [2024-07-13 01:00:34.079654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.080017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.080035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.080042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.080219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.080402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.080412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.080419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.083248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.759 [2024-07-13 01:00:34.092764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.093121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.093139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.093146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.093328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.093507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.093516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.093523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.096349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.759 [2024-07-13 01:00:34.105868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.106230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.106247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.106256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.106432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.106610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.106619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.106626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.109458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.759 [2024-07-13 01:00:34.118878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.119234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.119250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.119257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.119420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.119584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.119593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.119599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.122286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.759 [2024-07-13 01:00:34.131702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.132122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.132138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.132145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.132333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.132507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.132516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.132523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.135176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.759 [2024-07-13 01:00:34.144604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.144954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.144970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.144976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.145139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.145329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.145339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.145346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.148008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.759 [2024-07-13 01:00:34.157443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.157884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.157928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.759 [2024-07-13 01:00:34.157950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.759 [2024-07-13 01:00:34.158389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.759 [2024-07-13 01:00:34.158563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.759 [2024-07-13 01:00:34.158573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.759 [2024-07-13 01:00:34.158579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.759 [2024-07-13 01:00:34.161234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.759 [2024-07-13 01:00:34.170296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:22.759 [2024-07-13 01:00:34.170640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.759 [2024-07-13 01:00:34.170657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:22.760 [2024-07-13 01:00:34.170664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:22.760 [2024-07-13 01:00:34.170828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:22.760 [2024-07-13 01:00:34.170992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.760 [2024-07-13 01:00:34.171000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.760 [2024-07-13 01:00:34.171007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:22.760 [2024-07-13 01:00:34.173751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:22.760 [2024-07-13 01:00:34.183086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.183506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.183523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.183530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.183693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.183856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.183866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.760 [2024-07-13 01:00:34.183872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.760 [2024-07-13 01:00:34.186561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.760 [2024-07-13 01:00:34.195891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.196330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.196375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.196397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.196898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.197062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.197071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.760 [2024-07-13 01:00:34.197077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.760 [2024-07-13 01:00:34.199820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.760 [2024-07-13 01:00:34.208730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.209173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.209217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.209254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.209834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.210259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.210275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.760 [2024-07-13 01:00:34.210281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.760 [2024-07-13 01:00:34.212892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.760 [2024-07-13 01:00:34.221615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.221970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.222012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.222035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.222628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.223194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.223204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.760 [2024-07-13 01:00:34.223211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.760 [2024-07-13 01:00:34.225917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.760 [2024-07-13 01:00:34.234482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.234912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.234955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.234978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.235572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.236153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.236178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.760 [2024-07-13 01:00:34.236200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.760 [2024-07-13 01:00:34.242479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.760 [2024-07-13 01:00:34.249539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.249992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.250016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.250026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.250287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.250544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.250557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.760 [2024-07-13 01:00:34.250566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.760 [2024-07-13 01:00:34.254621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.760 [2024-07-13 01:00:34.262791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.263236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.263255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.263263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.263441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.263620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.263630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.760 [2024-07-13 01:00:34.263637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.760 [2024-07-13 01:00:34.266414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.760 [2024-07-13 01:00:34.275692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.760 [2024-07-13 01:00:34.276089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.760 [2024-07-13 01:00:34.276105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.760 [2024-07-13 01:00:34.276112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.760 [2024-07-13 01:00:34.276298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.760 [2024-07-13 01:00:34.276470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.760 [2024-07-13 01:00:34.276480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.761 [2024-07-13 01:00:34.276486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.761 [2024-07-13 01:00:34.279153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.761 [2024-07-13 01:00:34.288609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.761 [2024-07-13 01:00:34.288960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.761 [2024-07-13 01:00:34.288976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.761 [2024-07-13 01:00:34.288984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.761 [2024-07-13 01:00:34.289146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.761 [2024-07-13 01:00:34.289337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.761 [2024-07-13 01:00:34.289347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.761 [2024-07-13 01:00:34.289353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.761 [2024-07-13 01:00:34.292017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.761 [2024-07-13 01:00:34.301446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.761 [2024-07-13 01:00:34.301877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.761 [2024-07-13 01:00:34.301920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.761 [2024-07-13 01:00:34.301942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.761 [2024-07-13 01:00:34.302434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.761 [2024-07-13 01:00:34.302609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.761 [2024-07-13 01:00:34.302619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.761 [2024-07-13 01:00:34.302626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:22.761 [2024-07-13 01:00:34.305396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:22.761 [2024-07-13 01:00:34.314360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:22.761 [2024-07-13 01:00:34.314700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:22.761 [2024-07-13 01:00:34.314717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:22.761 [2024-07-13 01:00:34.314724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:22.761 [2024-07-13 01:00:34.314887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:22.761 [2024-07-13 01:00:34.315050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:22.761 [2024-07-13 01:00:34.315059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:22.761 [2024-07-13 01:00:34.315065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.317806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.327211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.327621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.327637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.327644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.327807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.327971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.327979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.327986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.330672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.339995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.340404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.340448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.340471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.341051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.341605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.341616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.341622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.344272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.352776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.353204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.353261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.353285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.353863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.354410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.354421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.354427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.357086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.365618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.366042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.366059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.366066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.366233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.366397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.366406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.366413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.369123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.378488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.378883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.378899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.378912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.379076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.379245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.379271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.379278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.381976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.391402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.391816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.391859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.391881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.392393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.392568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.392577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.392584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.395238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.404204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.404556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.404573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.404579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.404742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.404905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.404914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.404920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.407607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.417000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.417399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.417416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.417423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.417586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.417749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.417759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.417769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.420458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.429887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.430333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.430375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.430399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.020 [2024-07-13 01:00:34.430977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.020 [2024-07-13 01:00:34.431573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.020 [2024-07-13 01:00:34.431599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.020 [2024-07-13 01:00:34.431619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.020 [2024-07-13 01:00:34.434318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.020 [2024-07-13 01:00:34.442670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.020 [2024-07-13 01:00:34.443089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.020 [2024-07-13 01:00:34.443105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.020 [2024-07-13 01:00:34.443112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.443299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.443472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.443482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.443489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.446142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.455734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.456155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.456173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.456180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.456359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.456534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.456544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.456550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.459293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.468588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.469017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.469060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.469083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.469567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.469741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.469749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.469756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.472398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.481615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.482070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.482114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.482137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.482641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.482807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.482816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.482823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.485448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.494591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.494937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.494954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.494961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.495123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.495292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.495301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.495307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.497950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.507517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.507859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.507875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.507883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.508050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.508214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.508230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.508237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.511019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.520632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.521027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.521043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.521050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.521212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.521382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.521392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.521398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.524044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.533524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.533945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.533987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.534009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.534603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.535114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.535124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.535130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.537752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.546410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.546821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.546863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.546887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.547479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.547981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.547991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.548001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.550630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.559288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.559714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.559756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.559778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.560180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.560373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.560388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.560395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.563057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.021 [2024-07-13 01:00:34.572177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.021 [2024-07-13 01:00:34.572603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.021 [2024-07-13 01:00:34.572619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.021 [2024-07-13 01:00:34.572626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.021 [2024-07-13 01:00:34.572789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.021 [2024-07-13 01:00:34.572953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.021 [2024-07-13 01:00:34.572962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.021 [2024-07-13 01:00:34.572968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.021 [2024-07-13 01:00:34.575670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.281 [2024-07-13 01:00:34.585059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.281 [2024-07-13 01:00:34.585464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.281 [2024-07-13 01:00:34.585480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.281 [2024-07-13 01:00:34.585487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.281 [2024-07-13 01:00:34.585651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.281 [2024-07-13 01:00:34.585814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.281 [2024-07-13 01:00:34.585823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.281 [2024-07-13 01:00:34.585829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.281 [2024-07-13 01:00:34.588519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.281 [2024-07-13 01:00:34.597951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.281 [2024-07-13 01:00:34.598355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.281 [2024-07-13 01:00:34.598406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.281 [2024-07-13 01:00:34.598428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.281 [2024-07-13 01:00:34.598929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.281 [2024-07-13 01:00:34.599092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.281 [2024-07-13 01:00:34.599100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.281 [2024-07-13 01:00:34.599106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.281 [2024-07-13 01:00:34.604703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.281 [2024-07-13 01:00:34.612965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.281 [2024-07-13 01:00:34.613495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.281 [2024-07-13 01:00:34.613516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.281 [2024-07-13 01:00:34.613526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.281 [2024-07-13 01:00:34.613778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.281 [2024-07-13 01:00:34.614032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.281 [2024-07-13 01:00:34.614045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.281 [2024-07-13 01:00:34.614054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.281 [2024-07-13 01:00:34.618113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.281 [2024-07-13 01:00:34.625883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.281 [2024-07-13 01:00:34.626319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.281 [2024-07-13 01:00:34.626362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.281 [2024-07-13 01:00:34.626385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.281 [2024-07-13 01:00:34.626966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.281 [2024-07-13 01:00:34.627203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.281 [2024-07-13 01:00:34.627214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.281 [2024-07-13 01:00:34.627220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.281 [2024-07-13 01:00:34.629987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.281 [2024-07-13 01:00:34.638799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.281 [2024-07-13 01:00:34.639146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.281 [2024-07-13 01:00:34.639162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.281 [2024-07-13 01:00:34.639168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.281 [2024-07-13 01:00:34.639337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.281 [2024-07-13 01:00:34.639504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.281 [2024-07-13 01:00:34.639514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.639520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.642206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.651626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.652029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.652070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.652092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.652687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.653243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.653268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.653276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.655887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.664466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.664903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.664946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.664969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.665382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.665556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.665565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.665572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.668219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.677393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.677811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.677827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.677834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.677997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.678161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.678170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.678176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.680859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.690290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.690633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.690653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.690660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.690833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.691006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.691016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.691022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.693695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.703313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.703605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.703621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.703629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.703800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.703973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.703984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.703991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.706804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.716502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.716939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.716956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.716964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.717142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.717328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.717338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.717345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.720174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.729573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.729919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.729937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.729949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.730127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.730313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.730325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.730332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.733092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.742607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:23.282 [2024-07-13 01:00:34.742956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:23.282 [2024-07-13 01:00:34.742973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:23.282 [2024-07-13 01:00:34.742981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:23.282 [2024-07-13 01:00:34.743143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:23.282 [2024-07-13 01:00:34.743333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:23.282 [2024-07-13 01:00:34.743344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:23.282 [2024-07-13 01:00:34.743350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:23.282 [2024-07-13 01:00:34.746013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:23.282 [2024-07-13 01:00:34.755466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.282 [2024-07-13 01:00:34.755887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.282 [2024-07-13 01:00:34.755903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.282 [2024-07-13 01:00:34.755910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.282 [2024-07-13 01:00:34.756072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.282 [2024-07-13 01:00:34.756242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.282 [2024-07-13 01:00:34.756252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.282 [2024-07-13 01:00:34.756259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.282 [2024-07-13 01:00:34.758851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.282 [2024-07-13 01:00:34.768583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.282 [2024-07-13 01:00:34.769070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.282 [2024-07-13 01:00:34.769087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.282 [2024-07-13 01:00:34.769094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.282 [2024-07-13 01:00:34.769271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.282 [2024-07-13 01:00:34.769443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.282 [2024-07-13 01:00:34.769456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.282 [2024-07-13 01:00:34.769463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.282 [2024-07-13 01:00:34.772065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.282 [2024-07-13 01:00:34.781476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.282 [2024-07-13 01:00:34.781825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.282 [2024-07-13 01:00:34.781842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.282 [2024-07-13 01:00:34.781848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.282 [2024-07-13 01:00:34.782011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.282 [2024-07-13 01:00:34.782173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.282 [2024-07-13 01:00:34.782183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.282 [2024-07-13 01:00:34.782189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.282 [2024-07-13 01:00:34.785040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.282 [2024-07-13 01:00:34.794493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.282 [2024-07-13 01:00:34.794804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.282 [2024-07-13 01:00:34.794847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.282 [2024-07-13 01:00:34.794869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.282 [2024-07-13 01:00:34.795391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.282 [2024-07-13 01:00:34.795555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.282 [2024-07-13 01:00:34.795565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.282 [2024-07-13 01:00:34.795571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.282 [2024-07-13 01:00:34.798296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.282 [2024-07-13 01:00:34.807380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.282 [2024-07-13 01:00:34.807786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.282 [2024-07-13 01:00:34.807830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.282 [2024-07-13 01:00:34.807852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.282 [2024-07-13 01:00:34.808446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.282 [2024-07-13 01:00:34.808882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.282 [2024-07-13 01:00:34.808892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.283 [2024-07-13 01:00:34.808899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.283 [2024-07-13 01:00:34.811628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.283 [2024-07-13 01:00:34.820295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.283 [2024-07-13 01:00:34.820724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.283 [2024-07-13 01:00:34.820766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.283 [2024-07-13 01:00:34.820789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.283 [2024-07-13 01:00:34.821370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.283 [2024-07-13 01:00:34.821761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.283 [2024-07-13 01:00:34.821779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.283 [2024-07-13 01:00:34.821793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.283 [2024-07-13 01:00:34.828019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.283 [2024-07-13 01:00:34.835200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.283 [2024-07-13 01:00:34.835707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.283 [2024-07-13 01:00:34.835729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.283 [2024-07-13 01:00:34.835740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.283 [2024-07-13 01:00:34.835991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.283 [2024-07-13 01:00:34.836254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.283 [2024-07-13 01:00:34.836268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.283 [2024-07-13 01:00:34.836278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.542 [2024-07-13 01:00:34.840339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.542 [2024-07-13 01:00:34.848243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.542 [2024-07-13 01:00:34.848671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.542 [2024-07-13 01:00:34.848718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.542 [2024-07-13 01:00:34.848741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.542 [2024-07-13 01:00:34.849335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.542 [2024-07-13 01:00:34.849873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.542 [2024-07-13 01:00:34.849882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.542 [2024-07-13 01:00:34.849889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.542 [2024-07-13 01:00:34.852588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.542 [2024-07-13 01:00:34.861094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.542 [2024-07-13 01:00:34.861526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.861568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.861590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.862179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.862777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.862804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.862824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.865506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.543 [2024-07-13 01:00:34.874013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.874410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.874427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.874434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.874596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.874759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.874768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.874775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.877469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.543 [2024-07-13 01:00:34.886836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.887258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.887274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.887281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.887444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.887607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.887616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.887623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.890312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.543 [2024-07-13 01:00:34.899728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.900135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.900178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.900201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.900652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.900827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.900837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.900846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.903568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.543 [2024-07-13 01:00:34.912624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.913055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.913072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.913079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.913258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.913432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.913442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.913448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.916104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.543 [2024-07-13 01:00:34.925533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.925857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.925874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.925880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.926044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.926206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.926216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.926222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.928980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.543 [2024-07-13 01:00:34.938418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.938831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.938848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.938855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.939018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.939182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.939191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.939197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.941832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.543 [2024-07-13 01:00:34.951259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.951651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.951670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.951677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.951839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.952002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.952012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.952018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.954708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.543 [2024-07-13 01:00:34.964071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.964507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.964549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.964571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.965069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.965239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.965265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.965273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.967939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.543 [2024-07-13 01:00:34.976997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.977359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.977376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.977383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.977546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.977710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.977719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.977725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.980420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.543 [2024-07-13 01:00:34.989900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:34.990238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.543 [2024-07-13 01:00:34.990254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.543 [2024-07-13 01:00:34.990261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.543 [2024-07-13 01:00:34.990424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.543 [2024-07-13 01:00:34.990590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.543 [2024-07-13 01:00:34.990601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.543 [2024-07-13 01:00:34.990607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.543 [2024-07-13 01:00:34.993298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.543 [2024-07-13 01:00:35.002723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.543 [2024-07-13 01:00:35.003143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.003160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.003167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.003357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.003531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.003541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.003547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.006198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.544 [2024-07-13 01:00:35.015558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.544 [2024-07-13 01:00:35.015975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.015992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.015999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.016162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.016351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.016361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.016368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.019189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.544 [2024-07-13 01:00:35.028661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.544 [2024-07-13 01:00:35.029018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.029035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.029042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.029214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.029393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.029403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.029410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.032015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.544 [2024-07-13 01:00:35.041590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.544 [2024-07-13 01:00:35.042022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.042039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.042046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.042218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.042397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.042407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.042413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.045154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.544 [2024-07-13 01:00:35.054463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.544 [2024-07-13 01:00:35.054831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.054874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.054895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.055352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.055527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.055537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.055543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.058197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.544 [2024-07-13 01:00:35.067306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.544 [2024-07-13 01:00:35.067723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.067739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.067746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.067909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.068072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.068081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.068087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.070776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.544 [2024-07-13 01:00:35.080144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.544 [2024-07-13 01:00:35.080570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.080587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.080597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.080760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.080924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.080933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.080939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.083628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.544 [2024-07-13 01:00:35.093309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.544 [2024-07-13 01:00:35.093740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.544 [2024-07-13 01:00:35.093757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.544 [2024-07-13 01:00:35.093765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.544 [2024-07-13 01:00:35.093942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.544 [2024-07-13 01:00:35.094120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.544 [2024-07-13 01:00:35.094129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.544 [2024-07-13 01:00:35.094136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.544 [2024-07-13 01:00:35.096967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.805 [2024-07-13 01:00:35.106497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.106939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.106956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.106964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.107142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.107326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.107336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.107343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.110166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.805 [2024-07-13 01:00:35.119679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.120092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.120109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.120117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.120299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.120477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.120490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.120497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.123328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.805 [2024-07-13 01:00:35.132844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.133279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.133297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.133305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.133482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.133660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.133669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.133675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.136505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.805 [2024-07-13 01:00:35.146026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.146462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.146479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.146486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.146664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.146842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.146852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.146858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.149701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.805 [2024-07-13 01:00:35.159066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.159429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.159446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.159454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.159630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.159809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.159820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.159826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.162658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.805 [2024-07-13 01:00:35.172178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.172617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.172634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.172642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.172818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.172996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.173005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.173012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.175844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.805 [2024-07-13 01:00:35.185372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.185802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.185819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.185827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.186004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.186182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.186192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.186198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.189032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.805 [2024-07-13 01:00:35.198551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.198964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.198982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.198989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.199167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.199353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.199363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.199370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.202194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.805 [2024-07-13 01:00:35.211711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.212142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.805 [2024-07-13 01:00:35.212159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.805 [2024-07-13 01:00:35.212167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.805 [2024-07-13 01:00:35.212353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.805 [2024-07-13 01:00:35.212532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.805 [2024-07-13 01:00:35.212541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.805 [2024-07-13 01:00:35.212548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.805 [2024-07-13 01:00:35.215375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.805 [2024-07-13 01:00:35.224892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.805 [2024-07-13 01:00:35.225321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.225338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.225345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.225523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.225701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.225712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.225718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.228544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.806 [2024-07-13 01:00:35.238065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.238505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.238523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.238531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.238708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.238887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.238897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.238904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.241738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.806 [2024-07-13 01:00:35.251099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.251520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.251561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.251583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.252163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.252709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.252720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.252730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.255516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.806 [2024-07-13 01:00:35.264181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.264534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.264579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.264602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.265181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.265723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.265733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.265739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.268512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.806 [2024-07-13 01:00:35.277238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.277595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.277612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.277620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.277793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.277965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.277974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.277981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.280723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.806 [2024-07-13 01:00:35.290235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.290656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.290673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.290680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.290842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.291006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.291015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.291021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.293718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.806 [2024-07-13 01:00:35.303181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.303562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.303582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.303589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.303751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.303914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.303923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.303929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.306623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.806 [2024-07-13 01:00:35.316080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.316425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.316443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.316450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.316625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.316789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.316798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.316804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.319429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.806 [2024-07-13 01:00:35.329152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.329440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.329458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.329466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.329638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.329813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.329822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.329828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.332456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.806 [2024-07-13 01:00:35.342114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.342463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.342480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.342486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.806 [2024-07-13 01:00:35.342649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.806 [2024-07-13 01:00:35.342815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.806 [2024-07-13 01:00:35.342824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.806 [2024-07-13 01:00:35.342830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.806 [2024-07-13 01:00:35.345524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:23.806 [2024-07-13 01:00:35.355071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.806 [2024-07-13 01:00:35.355442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.806 [2024-07-13 01:00:35.355485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:23.806 [2024-07-13 01:00:35.355509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:23.807 [2024-07-13 01:00:35.355987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:23.807 [2024-07-13 01:00:35.356161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.807 [2024-07-13 01:00:35.356171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.807 [2024-07-13 01:00:35.356177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.807 [2024-07-13 01:00:35.358808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.067 [2024-07-13 01:00:35.368120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.067 [2024-07-13 01:00:35.368440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.067 [2024-07-13 01:00:35.368485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.067 [2024-07-13 01:00:35.368507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.067 [2024-07-13 01:00:35.369009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.067 [2024-07-13 01:00:35.369194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.067 [2024-07-13 01:00:35.369204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.067 [2024-07-13 01:00:35.369210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.067 [2024-07-13 01:00:35.371918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.067 [2024-07-13 01:00:35.381090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.067 [2024-07-13 01:00:35.381450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.067 [2024-07-13 01:00:35.381467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.067 [2024-07-13 01:00:35.381475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.067 [2024-07-13 01:00:35.381648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.067 [2024-07-13 01:00:35.381820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.067 [2024-07-13 01:00:35.381829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.067 [2024-07-13 01:00:35.381836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.067 [2024-07-13 01:00:35.384536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.067 [2024-07-13 01:00:35.394000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.067 [2024-07-13 01:00:35.394391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.067 [2024-07-13 01:00:35.394425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.067 [2024-07-13 01:00:35.394433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.067 [2024-07-13 01:00:35.394609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.067 [2024-07-13 01:00:35.394787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.067 [2024-07-13 01:00:35.394796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.067 [2024-07-13 01:00:35.394803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.067 [2024-07-13 01:00:35.397635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.067 [2024-07-13 01:00:35.407163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.067 [2024-07-13 01:00:35.407460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.407479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.407486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.407662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.407842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.407852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.407858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.410692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.068 [2024-07-13 01:00:35.420272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.420684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.420702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.420710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.420887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.421065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.421075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.421082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.423977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.068 [2024-07-13 01:00:35.433344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.433776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.433794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.433804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.433983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.434162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.434172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.434179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.437015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.068 [2024-07-13 01:00:35.446542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.446981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.446999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.447007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.447184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.447371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.447382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.447388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.450213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.068 [2024-07-13 01:00:35.459741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.460070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.460087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.460094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.460279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.460457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.460467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.460473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.463314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.068 [2024-07-13 01:00:35.473009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.473448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.473466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.473474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.473656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.473840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.473853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.473859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.476781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
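The companion error in each iteration, "Failed to flush tqpair=0x6152d0 (9): Bad file descriptor", follows directly from the refused connect: the qpair's socket descriptor has already been torn down, so the subsequent flush hits EBADF (errno 9). A tiny sketch of that failure mode, assuming nothing about SPDK's internals beyond POSIX file-descriptor semantics:

import errno
import os

r, w = os.pipe()
os.close(w)                    # descriptor is gone, like the dead qpair's socket
try:
    os.write(w, b"x")          # any further I/O on the closed fd...
except OSError as e:
    print(e.errno, e.errno == errno.EBADF)   # ...fails with 9 (EBADF)
os.close(r)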
00:35:24.068 [2024-07-13 01:00:35.486126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.486567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.486586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.486593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.486770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.486948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.486958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.486965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.489799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.068 [2024-07-13 01:00:35.499328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.499707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.499724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.499731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.499908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.500086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.500097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.500104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.502941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.068 [2024-07-13 01:00:35.512572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.512994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.513011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.513019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.513202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.513393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.513403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.513410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.516329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.068 [2024-07-13 01:00:35.525838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.526219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.526244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.526251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.526452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.526636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.526647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.526654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.529550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.068 [2024-07-13 01:00:35.538912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.539348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.539366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.539374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.539552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.539731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.539741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.539748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.542579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.068 [2024-07-13 01:00:35.552109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.552551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.552568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.552576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.552754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.552933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.552943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.552950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.555783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.068 [2024-07-13 01:00:35.565197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.565606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.565649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.565671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.566268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.566471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.068 [2024-07-13 01:00:35.566481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.068 [2024-07-13 01:00:35.566487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.068 [2024-07-13 01:00:35.569233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.068 [2024-07-13 01:00:35.578240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.068 [2024-07-13 01:00:35.578667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.068 [2024-07-13 01:00:35.578684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.068 [2024-07-13 01:00:35.578691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.068 [2024-07-13 01:00:35.578862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.068 [2024-07-13 01:00:35.579035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.069 [2024-07-13 01:00:35.579044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.069 [2024-07-13 01:00:35.579051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.069 [2024-07-13 01:00:35.581801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.069 [2024-07-13 01:00:35.591142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.069 [2024-07-13 01:00:35.591566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.069 [2024-07-13 01:00:35.591609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.069 [2024-07-13 01:00:35.591631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.069 [2024-07-13 01:00:35.592212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.069 [2024-07-13 01:00:35.592517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.069 [2024-07-13 01:00:35.592527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.069 [2024-07-13 01:00:35.592533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.069 [2024-07-13 01:00:35.595187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.069 [2024-07-13 01:00:35.604014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.069 [2024-07-13 01:00:35.604433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.069 [2024-07-13 01:00:35.604450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.069 [2024-07-13 01:00:35.604456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.069 [2024-07-13 01:00:35.604619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.069 [2024-07-13 01:00:35.604782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.069 [2024-07-13 01:00:35.604792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.069 [2024-07-13 01:00:35.604801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.069 [2024-07-13 01:00:35.607491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.069 [2024-07-13 01:00:35.616835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.069 [2024-07-13 01:00:35.617267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.069 [2024-07-13 01:00:35.617310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.069 [2024-07-13 01:00:35.617333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.069 [2024-07-13 01:00:35.617594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.069 [2024-07-13 01:00:35.617758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.069 [2024-07-13 01:00:35.617767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.069 [2024-07-13 01:00:35.617773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.069 [2024-07-13 01:00:35.620507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.329 [2024-07-13 01:00:35.629858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.329 [2024-07-13 01:00:35.630267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.329 [2024-07-13 01:00:35.630284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.329 [2024-07-13 01:00:35.630291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.329 [2024-07-13 01:00:35.630454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.329 [2024-07-13 01:00:35.630617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.329 [2024-07-13 01:00:35.630626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.329 [2024-07-13 01:00:35.630632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.329 [2024-07-13 01:00:35.633319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.329 [2024-07-13 01:00:35.642650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.329 [2024-07-13 01:00:35.643067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.329 [2024-07-13 01:00:35.643111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.329 [2024-07-13 01:00:35.643134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.329 [2024-07-13 01:00:35.643654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.643828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.643838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.643844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.646532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.330 [2024-07-13 01:00:35.655445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.655760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.655780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.655787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.655950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.656112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.656121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.656127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.658817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.330 [2024-07-13 01:00:35.668372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.668801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.668843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.668865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.669464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.669629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.669638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.669644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.672236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.330 [2024-07-13 01:00:35.681188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.681629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.681645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.681653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.681816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.681979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.681988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.681995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.684874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
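Taken together, each iteration above is one pass of the same cycle: disconnect notice, refused connect, recv-state error, EBADF on flush, controller marked failed, reset reported failed, then another attempt roughly every 13 ms (01:00:35.655 -> .668 -> .681 -> ...). As a hedged illustration only (this is not SPDK's reconnect code; the function name, attempt count, and fixed delay are invented for the sketch), the shape of such a bounded retry loop is:

import socket
import time

def try_reconnect(addr, port, attempts=10, delay=0.013):
    # Keep attempting to connect; give up after `attempts` tries.
    for _ in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((addr, port))
            return s              # target is back; hand the socket over
        except OSError:
            s.close()             # mirror the log: fail, tear down, retry
            time.sleep(delay)     # ~13 ms pacing, as observed above
    return None                   # controller would stay in failed state

Once the target side comes back up, a loop like this succeeds on the next attempt; until then it produces exactly the kind of repeating failure block this log shows.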
00:35:24.330 [2024-07-13 01:00:35.694061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.694413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.694430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.694437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.694601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.694771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.694780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.694786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.697476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.330 [2024-07-13 01:00:35.706901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.707325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.707368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.707391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.707912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.708076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.708085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.708091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.710780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.330 [2024-07-13 01:00:35.719732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.720129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.720146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.720153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.720342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.720516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.720525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.720531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.723186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.330 [2024-07-13 01:00:35.732517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.732951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.732993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.733015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.733611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.733855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.733865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.733872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.736561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.330 [2024-07-13 01:00:35.745372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.745708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.745724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.745731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.745893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.330 [2024-07-13 01:00:35.746056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.330 [2024-07-13 01:00:35.746065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.330 [2024-07-13 01:00:35.746071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.330 [2024-07-13 01:00:35.748760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.330 [2024-07-13 01:00:35.758246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.330 [2024-07-13 01:00:35.758594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.330 [2024-07-13 01:00:35.758611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.330 [2024-07-13 01:00:35.758618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.330 [2024-07-13 01:00:35.758780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.331 [2024-07-13 01:00:35.758944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.331 [2024-07-13 01:00:35.758953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.331 [2024-07-13 01:00:35.758960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.331 [2024-07-13 01:00:35.761648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.331 [2024-07-13 01:00:35.771227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.331 [2024-07-13 01:00:35.771651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.331 [2024-07-13 01:00:35.771667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.331 [2024-07-13 01:00:35.771674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.331 [2024-07-13 01:00:35.771837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.331 [2024-07-13 01:00:35.772001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.331 [2024-07-13 01:00:35.772009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.331 [2024-07-13 01:00:35.772015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.331 [2024-07-13 01:00:35.774853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.331 [2024-07-13 01:00:35.784262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.331 [2024-07-13 01:00:35.784631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.331 [2024-07-13 01:00:35.784674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.331 [2024-07-13 01:00:35.784705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.331 [2024-07-13 01:00:35.785190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.331 [2024-07-13 01:00:35.785359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.331 [2024-07-13 01:00:35.785369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.331 [2024-07-13 01:00:35.785375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.331 [2024-07-13 01:00:35.788060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.331 [2024-07-13 01:00:35.797125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.331 [2024-07-13 01:00:35.797446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.331 [2024-07-13 01:00:35.797462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.331 [2024-07-13 01:00:35.797470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.331 [2024-07-13 01:00:35.797634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.331 [2024-07-13 01:00:35.797796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.331 [2024-07-13 01:00:35.797805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.331 [2024-07-13 01:00:35.797812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.331 [2024-07-13 01:00:35.800502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.331 [2024-07-13 01:00:35.810262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.331 [2024-07-13 01:00:35.810679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.331 [2024-07-13 01:00:35.810696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.331 [2024-07-13 01:00:35.810703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.331 [2024-07-13 01:00:35.810876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.331 [2024-07-13 01:00:35.811050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.331 [2024-07-13 01:00:35.811060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.331 [2024-07-13 01:00:35.811066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.331 [2024-07-13 01:00:35.813766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.331 [2024-07-13 01:00:35.823261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.331 [2024-07-13 01:00:35.823678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.331 [2024-07-13 01:00:35.823694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.331 [2024-07-13 01:00:35.823702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.331 [2024-07-13 01:00:35.823864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.331 [2024-07-13 01:00:35.824027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.331 [2024-07-13 01:00:35.824039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.331 [2024-07-13 01:00:35.824046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.331 [2024-07-13 01:00:35.826734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:24.331 [2024-07-13 01:00:35.836199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.331 [2024-07-13 01:00:35.836608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.331 [2024-07-13 01:00:35.836657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:24.331 [2024-07-13 01:00:35.836679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:24.331 [2024-07-13 01:00:35.837218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:24.331 [2024-07-13 01:00:35.837388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.331 [2024-07-13 01:00:35.837398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.331 [2024-07-13 01:00:35.837403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.331 [2024-07-13 01:00:35.840087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:24.331 [2024-07-13 01:00:35.849054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:24.331 [2024-07-13 01:00:35.849428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:24.331 [2024-07-13 01:00:35.849444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:24.331 [2024-07-13 01:00:35.849452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:24.331 [2024-07-13 01:00:35.849627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:24.331 [2024-07-13 01:00:35.849791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:24.331 [2024-07-13 01:00:35.849800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:24.331 [2024-07-13 01:00:35.849805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:24.331 [2024-07-13 01:00:35.852424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 29 further reconnect attempts (01:00:35.862 - 01:00:36.226) elided; each repeats the identical nine-record cycle above, failing in connect() with errno = 111 and ending with "Resetting controller failed." ...]
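The storm above is the host side of the bdevperf error-recovery test: every controller reset immediately dies in connect() with errno = 111 (ECONNREFUSED on Linux), meaning nothing is listening on 10.0.0.2:4420 because the target process has just been killed (see the "Killed" line below). If netcat is available on the host, a one-line probe reproduces the same refusal, assuming this test bed's address layout:

  # With the target down, the TCP handshake to the NVMe-oF listener is refused outright
  nc -z -w1 10.0.0.2 4420 || echo 'connect() refused (errno 111), matching the log'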
00:35:24.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1613615 Killed "${NVMF_APP[@]}" "$@"
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:24.855 [2024-07-13 01:00:36.235634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:35:24.855 [2024-07-13 01:00:36.235979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:24.855 [2024-07-13 01:00:36.235997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420
00:35:24.855 [2024-07-13 01:00:36.236005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:24.855 [2024-07-13 01:00:36.236177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor
00:35:24.855 [2024-07-13 01:00:36.236356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:24.855 [2024-07-13 01:00:36.236366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:24.855 [2024-07-13 01:00:36.236372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:24.855 [2024-07-13 01:00:36.239204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1614801
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1614801
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1614801 ']'
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:24.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:24.855 01:00:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[... two reconnect cycles (01:00:36.248 and 01:00:36.261) elided ...]
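The shell trace above shows the recovery path: bdevperf.sh's tgt_init calls nvmfappstart, which relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for its RPC socket. A condensed sketch of what those traced commands amount to, using the paths from this run (the polling loop below is a simplified stand-in for the real waitforlisten helper in autotest_common.sh):

  # Relaunch the target; -m 0xE pins reactors to cores 1-3, -e 0xFFFF enables all tracepoint groups
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Wait for the app's RPC socket to appear (max_retries=100, as in the trace)
  for _ in $(seq 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done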
[... two reconnect cycles (01:00:36.275 and 01:00:36.288) elided; the relaunched target's startup output begins below ...]
00:35:24.855 [2024-07-13 01:00:36.290154] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:35:24.855 [2024-07-13 01:00:36.290195] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:24.855 [2024-07-13 01:00:36.291661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... two reconnect cycles (01:00:36.301 - 01:00:36.317) elided ...]
00:35:24.856 EAL: No free 2048 kB hugepages reported on node 1
[... two further reconnect cycles (01:00:36.327 - 01:00:36.344) elided ...]
[... reconnect cycle (01:00:36.353 - 01:00:36.357) elided ...]
00:35:24.856 [2024-07-13 01:00:36.363101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
[... reconnect cycle (01:00:36.366 - 01:00:36.370) elided ...]
[... two reconnect cycles (01:00:36.379 and 01:00:36.392) elided ...]
00:35:24.856 [2024-07-13 01:00:36.403889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:24.856 [2024-07-13 01:00:36.403918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:24.856 [2024-07-13 01:00:36.403924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:24.856 [2024-07-13 01:00:36.403930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:24.856 [2024-07-13 01:00:36.403935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
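Because the target came back up with tracing enabled, the reset storm can be examined after the fact exactly as the notices above describe; both commands are taken from the log itself:

  spdk_trace -s nvmf -i 0          # decode a snapshot of events from the running nvmf app
  cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw trace file for offline analysis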
00:35:24.857 [2024-07-13 01:00:36.403991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:35:24.857 [2024-07-13 01:00:36.404101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:24.857 [2024-07-13 01:00:36.404102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[... two reconnect cycles (01:00:36.405 and 01:00:36.418) elided ...]
00:35:25.116 [2024-07-13 01:00:36.431948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.116 [2024-07-13 01:00:36.432340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.116 [2024-07-13 01:00:36.432362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.116 [2024-07-13 01:00:36.432371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.116 [2024-07-13 01:00:36.432550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.116 [2024-07-13 01:00:36.432730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.116 [2024-07-13 01:00:36.432740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.116 [2024-07-13 01:00:36.432750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.116 [2024-07-13 01:00:36.435579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.116 [2024-07-13 01:00:36.445100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.116 [2024-07-13 01:00:36.445495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.116 [2024-07-13 01:00:36.445518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.116 [2024-07-13 01:00:36.445526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.116 [2024-07-13 01:00:36.445707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.116 [2024-07-13 01:00:36.445886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.116 [2024-07-13 01:00:36.445896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.116 [2024-07-13 01:00:36.445910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.116 [2024-07-13 01:00:36.448743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
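Every failed attempt above is the same TCP-level symptom: errno = 111 is ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420 while the controller is being reset, so each reconnect is refused before any NVMe/TCP exchange can happen. A quick probe one could run from the initiator host to confirm (netcat and its -z/-v/-w flags are an assumption about the host's tooling; the address and port come from the log):
+ nc -z -v -w 1 10.0.0.2 4420 || echo 'connection refused, matching errno 111 in the log'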
00:35:25.116–00:35:25.638 [2024-07-13 01:00:36.458 – 01:00:37.026] [... the identical reset/reconnect cycle repeats 44 more times against tqpair=0x6152d0 (addr=10.0.0.2, port=4420), each pass logging: nvme_ctrlr_disconnect *NOTICE* resetting controller → posix_sock_create connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock error → Failed to flush tqpair (9): Bad file descriptor → Ctrlr is in error state → controller reinitialization failed → in failed state → Resetting controller failed ...]
00:35:25.638 [2024-07-13 01:00:37.035794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.638 [2024-07-13 01:00:37.036167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.638 [2024-07-13 01:00:37.036185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.638 [2024-07-13 01:00:37.036193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.638 [2024-07-13 01:00:37.036373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.638 [2024-07-13 01:00:37.036551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.638 [2024-07-13 01:00:37.036561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.638 [2024-07-13 01:00:37.036568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.638 [2024-07-13 01:00:37.039397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.638 [2024-07-13 01:00:37.048908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.638 [2024-07-13 01:00:37.049276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.638 [2024-07-13 01:00:37.049294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.638 [2024-07-13 01:00:37.049301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.638 [2024-07-13 01:00:37.049479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.638 [2024-07-13 01:00:37.049658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.638 [2024-07-13 01:00:37.049668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.638 [2024-07-13 01:00:37.049675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.638 [2024-07-13 01:00:37.052504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:25.638 [2024-07-13 01:00:37.062021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.638 [2024-07-13 01:00:37.062488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.638 [2024-07-13 01:00:37.062507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.638 [2024-07-13 01:00:37.062514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.638 [2024-07-13 01:00:37.062691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.638 [2024-07-13 01:00:37.062870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.638 [2024-07-13 01:00:37.062880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.638 [2024-07-13 01:00:37.062887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.638 [2024-07-13 01:00:37.065718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.638 [2024-07-13 01:00:37.075063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.638 [2024-07-13 01:00:37.075487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.638 [2024-07-13 01:00:37.075504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.638 [2024-07-13 01:00:37.075511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.638 [2024-07-13 01:00:37.075688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.638 [2024-07-13 01:00:37.075867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.638 [2024-07-13 01:00:37.075877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.638 [2024-07-13 01:00:37.075883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.638 [2024-07-13 01:00:37.078710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:25.638 [2024-07-13 01:00:37.088222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.638 [2024-07-13 01:00:37.088660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.638 [2024-07-13 01:00:37.088677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.638 [2024-07-13 01:00:37.088685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.638 [2024-07-13 01:00:37.088862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.638 [2024-07-13 01:00:37.089040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.638 [2024-07-13 01:00:37.089050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.638 [2024-07-13 01:00:37.089056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.638 [2024-07-13 01:00:37.091884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.638 [2024-07-13 01:00:37.101394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.638 [2024-07-13 01:00:37.101836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.638 [2024-07-13 01:00:37.101853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.638 [2024-07-13 01:00:37.101861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.638 [2024-07-13 01:00:37.102038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.638 [2024-07-13 01:00:37.102216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.638 [2024-07-13 01:00:37.102232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.638 [2024-07-13 01:00:37.102240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:35:25.639 [2024-07-13 01:00:37.105062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
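The loop above is the bdev_nvme reset path: nvme_ctrlr_disconnect tears down the qpair, the TCP connect is retried, and _bdev_nvme_reset_ctrlr_complete reports failure until the listener exists. When attaching a controller by hand, the retry cadence can be tuned at attach time. A sketch using scripts/rpc.py — the bdev name Nvme1 matches the Nvme1n1 namespace in the job summary below, the reconnect/loss flag names are as found in recent SPDK releases (verify against your tree's `scripts/rpc.py bdev_nvme_attach_controller --help`), and the timeout values are illustrative assumptions:

  # Keep reconnecting every 2 s, give up after 30 s (values are assumed, not from this run)
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 30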
00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.639 [2024-07-13 01:00:37.114583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.639 [2024-07-13 01:00:37.114884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.639 [2024-07-13 01:00:37.114901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.639 [2024-07-13 01:00:37.114908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.639 [2024-07-13 01:00:37.115086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.639 [2024-07-13 01:00:37.115270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.639 [2024-07-13 01:00:37.115280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.639 [2024-07-13 01:00:37.115287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.639 [2024-07-13 01:00:37.118110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.639 [2024-07-13 01:00:37.127632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.639 [2024-07-13 01:00:37.127993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.639 [2024-07-13 01:00:37.128011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.639 [2024-07-13 01:00:37.128018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.639 [2024-07-13 01:00:37.128196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.639 [2024-07-13 01:00:37.128381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.639 [2024-07-13 01:00:37.128392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.639 [2024-07-13 01:00:37.128399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.639 [2024-07-13 01:00:37.131227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.639 [2024-07-13 01:00:37.140747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.639 [2024-07-13 01:00:37.141162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.639 [2024-07-13 01:00:37.141180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.639 [2024-07-13 01:00:37.141187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.639 [2024-07-13 01:00:37.141370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.639 [2024-07-13 01:00:37.141548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.639 [2024-07-13 01:00:37.141557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.639 [2024-07-13 01:00:37.141564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.639 [2024-07-13 01:00:37.144388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.639 [2024-07-13 01:00:37.146722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.639 [2024-07-13 01:00:37.153901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.639 [2024-07-13 01:00:37.154344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.639 [2024-07-13 01:00:37.154361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.639 [2024-07-13 01:00:37.154368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.639 [2024-07-13 01:00:37.154546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.639 [2024-07-13 01:00:37.154723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.639 [2024-07-13 01:00:37.154733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.639 [2024-07-13 01:00:37.154739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:25.639 [2024-07-13 01:00:37.157566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.639 [2024-07-13 01:00:37.167078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.639 [2024-07-13 01:00:37.167468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.639 [2024-07-13 01:00:37.167486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.639 [2024-07-13 01:00:37.167494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.639 [2024-07-13 01:00:37.167672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.639 [2024-07-13 01:00:37.167850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.639 [2024-07-13 01:00:37.167860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.639 [2024-07-13 01:00:37.167867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.639 [2024-07-13 01:00:37.170695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.639 [2024-07-13 01:00:37.180201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.639 [2024-07-13 01:00:37.180644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.639 [2024-07-13 01:00:37.180662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.639 [2024-07-13 01:00:37.180669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.639 [2024-07-13 01:00:37.180847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.639 [2024-07-13 01:00:37.181025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.639 [2024-07-13 01:00:37.181034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.639 [2024-07-13 01:00:37.181041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.639 [2024-07-13 01:00:37.183867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
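Interleaved with the reconnect noise, the harness is standing up the target side: the TCP transport and the Malloc0 bdev were created in the rpc_cmd calls above, and the subsystem, namespace, and listener follow below. The harness's rpc_cmd is a wrapper around scripts/rpc.py, so run by hand against the same RPC socket the full sequence would look roughly like this (a sketch; the NQN, serial, address, and port are the ones visible in the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420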
00:35:25.639 Malloc0 00:35:25.639 [2024-07-13 01:00:37.193408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.639 [2024-07-13 01:00:37.193785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:25.639 [2024-07-13 01:00:37.193802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.639 [2024-07-13 01:00:37.193810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.639 [2024-07-13 01:00:37.193990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.639 [2024-07-13 01:00:37.194169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.639 [2024-07-13 01:00:37.194180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.639 [2024-07-13 01:00:37.194186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.639 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.898 [2024-07-13 01:00:37.197029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.898 [2024-07-13 01:00:37.206562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.898 [2024-07-13 01:00:37.207002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:25.898 [2024-07-13 01:00:37.207020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6152d0 with addr=10.0.0.2, port=4420 00:35:25.898 [2024-07-13 01:00:37.207028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6152d0 is same with the state(5) to be set 00:35:25.898 [2024-07-13 01:00:37.207206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6152d0 (9): Bad file descriptor 00:35:25.898 [2024-07-13 01:00:37.207392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:25.898 [2024-07-13 01:00:37.207402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:25.898 [2024-07-13 01:00:37.207409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:25.898 [2024-07-13 01:00:37.210233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.898 [2024-07-13 01:00:37.216626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.898 [2024-07-13 01:00:37.219745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.898 01:00:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1613873 00:35:25.898 [2024-07-13 01:00:37.249601] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:34.051 00:35:34.051 Latency(us) 00:35:34.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.051 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:34.051 Verification LBA range: start 0x0 length 0x4000 00:35:34.051 Nvme1n1 : 15.00 8210.07 32.07 12674.07 0.00 6108.95 644.67 17552.25 00:35:34.051 =================================================================================================================== 00:35:34.051 Total : 8210.07 32.07 12674.07 0.00 6108.95 644.67 17552.25 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:34.310 rmmod nvme_tcp 00:35:34.310 rmmod nvme_fabrics 00:35:34.310 rmmod nvme_keyring 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1614801 ']' 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1614801 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1614801 ']' 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1614801 00:35:34.310 01:00:45 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1614801 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1614801' 00:35:34.310 killing process with pid 1614801 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1614801 00:35:34.310 01:00:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1614801 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:34.569 01:00:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.106 01:00:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:37.106 00:35:37.106 real 0m25.607s 00:35:37.106 user 1m0.361s 00:35:37.106 sys 0m6.395s 00:35:37.106 01:00:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:37.106 01:00:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:37.106 ************************************ 00:35:37.106 END TEST nvmf_bdevperf 00:35:37.106 ************************************ 00:35:37.106 01:00:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:37.106 01:00:48 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:37.106 01:00:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:37.106 01:00:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:37.106 01:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:37.106 ************************************ 00:35:37.106 START TEST nvmf_target_disconnect 00:35:37.106 ************************************ 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:37.106 * Looking for test storage... 
00:35:37.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:37.106 01:00:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:42.383 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:42.384 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:42.384 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.384 01:00:53 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:42.384 Found net devices under 0000:86:00.0: cvl_0_0 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:42.384 Found net devices under 0000:86:00.1: cvl_0_1 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:42.384 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:42.643 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:42.643 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:42.643 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:42.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:42.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:35:42.643 00:35:42.643 --- 10.0.0.2 ping statistics --- 00:35:42.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.643 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:35:42.643 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:42.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:42.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:35:42.644 00:35:42.644 --- 10.0.0.1 ping statistics --- 00:35:42.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:42.644 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:42.644 01:00:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:42.644 ************************************ 00:35:42.644 START TEST nvmf_target_disconnect_tc1 00:35:42.644 ************************************ 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:35:42.644 
01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.644 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.644 [2024-07-13 01:00:54.168493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:42.644 [2024-07-13 01:00:54.168547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec9ab0 with addr=10.0.0.2, port=4420 00:35:42.644 [2024-07-13 01:00:54.168569] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:42.644 [2024-07-13 01:00:54.168582] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:42.644 [2024-07-13 01:00:54.168589] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:35:42.644 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:42.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:42.644 Initializing NVMe Controllers 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:42.644 00:35:42.644 real 0m0.115s 00:35:42.644 user 0m0.047s 00:35:42.644 sys 
0m0.069s 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:42.644 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:42.644 ************************************ 00:35:42.644 END TEST nvmf_target_disconnect_tc1 00:35:42.644 ************************************ 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:42.902 ************************************ 00:35:42.902 START TEST nvmf_target_disconnect_tc2 00:35:42.902 ************************************ 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1619804 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1619804 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1619804 ']' 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:42.902 01:00:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:42.902 [2024-07-13 01:00:54.308811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:35:42.902 [2024-07-13 01:00:54.308854] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:42.902 EAL: No free 2048 kB hugepages reported on node 1
00:35:42.902 [2024-07-13 01:00:54.381145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:42.902 [2024-07-13 01:00:54.421595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:42.902 [2024-07-13 01:00:54.421637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:42.902 [2024-07-13 01:00:54.421644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:42.902 [2024-07-13 01:00:54.421650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:42.902 [2024-07-13 01:00:54.421654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:42.902 [2024-07-13 01:00:54.421784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:35:42.902 [2024-07-13 01:00:54.421891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:35:42.902 [2024-07-13 01:00:54.422001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:35:42.902 [2024-07-13 01:00:54.422002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.832 Malloc0
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.832 [2024-07-13 01:00:55.176163] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.832 [2024-07-13 01:00:55.208383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1619988
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:35:43.832 01:00:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:43.832 EAL: No free 2048 kB hugepages reported on node 1
00:35:45.730 01:00:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1619804
00:35:45.730 01:00:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 [2024-07-13 01:00:57.235432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 [2024-07-13 01:00:57.235630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Write completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
00:35:45.730 Read completed with error (sct=0, sc=8)
00:35:45.730 starting I/O failed
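For reference, the target-side bring-up that tc2 performed above through rpc_cmd can be sketched as direct rpc.py calls against an already-running nvmf_tgt. This is a sketch only: it assumes the default /var/tmp/spdk.sock RPC socket and the workspace path of this run, and the flags are copied verbatim from the rpc_cmd lines above.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The completion dumps above are the intended effect of the 'kill -9 1619804' at host/target_disconnect.sh@45: once the target process is gone, every in-flight I/O on the reconnect host's qpairs completes in error and the completion poll reports CQ transport error -6 (ENXIO, "No such device or address").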
00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Read completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.730 Write completed with error (sct=0, sc=8) 00:35:45.730 starting I/O failed 00:35:45.731 [2024-07-13 01:00:57.235823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 
00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Read completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 Write completed with error (sct=0, sc=8) 00:35:45.731 starting I/O failed 00:35:45.731 [2024-07-13 01:00:57.236010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:45.731 [2024-07-13 01:00:57.236189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.236210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.236312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.236324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.236434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.236445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.236516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.236527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.236642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.236652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.236760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.236771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 
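From here the trace settles into one repeating pattern: the reconnect host keeps calling connect() toward 10.0.0.2:4420, receives errno 111 (ECONNREFUSED) because the killed target is no longer listening, and tears the qpair down again. That retry behaviour can be approximated from the shell with a plain TCP probe; the loop below is a hypothetical helper, not part of the test suite, and assumes nc (netcat) is available:

  until nc -z -w 1 10.0.0.2 4420; do
    echo 'connect() still refused (errno 111); target not back yet'
    sleep 1
  done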
00:35:45.731 [2024-07-13 01:00:57.236919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.236930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.237899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.237910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 
00:35:45.731 [2024-07-13 01:00:57.238001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.238958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.238969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 
00:35:45.731 [2024-07-13 01:00:57.239061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.239088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.239246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.239290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.239477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.731 [2024-07-13 01:00:57.239508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.731 qpair failed and we were unable to recover it. 00:35:45.731 [2024-07-13 01:00:57.239632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.239642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.239714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.239724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.239876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.239909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.240019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.240050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.240185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.240215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.240343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.240374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.240559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.240570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 
00:35:45.732 [2024-07-13 01:00:57.240715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.240747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.240881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.240912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.241023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.241054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.241191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.241222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.241378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.241394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.241477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.241487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.241612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.241642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.241765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.241796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.241905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.241937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.242066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.242097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 
00:35:45.732 [2024-07-13 01:00:57.242255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.242287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.242414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.242445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.242630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.242660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.242767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.242798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.242919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.242949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.243075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.243106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.243217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.243259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.244147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.244171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.244269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.244281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.244353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.244364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 
00:35:45.732 [2024-07-13 01:00:57.244508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.244537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.244709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.244739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.244875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.244905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.245011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.245042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.245214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.245254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.245441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.245472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.245651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.245682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.245949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.245980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.246103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.246134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.246259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.246292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 
00:35:45.732 [2024-07-13 01:00:57.246417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.246448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.247954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.248018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.248218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.732 [2024-07-13 01:00:57.248270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.732 qpair failed and we were unable to recover it. 00:35:45.732 [2024-07-13 01:00:57.248549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.248581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.248886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.248917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.249093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.249124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.249261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.249295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.249419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.249450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.249564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.249597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.249723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.249753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 
00:35:45.733 [2024-07-13 01:00:57.249940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.249970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.250162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.250194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.250396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.250430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.250578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.250609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.250734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.250764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.250909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.250940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.251120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.251150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.251340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.251372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.251561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.251591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.251706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.251737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 
00:35:45.733 [2024-07-13 01:00:57.251847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.251878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.251989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.252017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.252130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.252160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.252280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.252312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.252491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.252522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.252708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.252739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.252950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.252980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.253090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.253121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.253252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.253288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 00:35:45.733 [2024-07-13 01:00:57.253466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.733 [2024-07-13 01:00:57.253496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:45.733 qpair failed and we were unable to recover it. 
00:35:45.733 [2024-07-13 01:00:57.253608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.733 [2024-07-13 01:00:57.253639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:45.733 qpair failed and we were unable to recover it.
00:35:45.735 [2024-07-13 01:00:57.265786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.735 [2024-07-13 01:00:57.265817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:45.735 qpair failed and we were unable to recover it.
00:35:45.735 [2024-07-13 01:00:57.265975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.735 [2024-07-13 01:00:57.266046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:45.735 qpair failed and we were unable to recover it.
00:35:45.737 [2024-07-13 01:00:57.280491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.737 [2024-07-13 01:00:57.280534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:45.737 qpair failed and we were unable to recover it.
00:35:45.737 [2024-07-13 01:00:57.280792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.737 [2024-07-13 01:00:57.280861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:45.737 qpair failed and we were unable to recover it.
00:35:46.017 [2024-07-13 01:00:57.292844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.017 [2024-07-13 01:00:57.292873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:46.017 qpair failed and we were unable to recover it.
00:35:46.017 [2024-07-13 01:00:57.292991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.293019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.293138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.293167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.293273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.293304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.293483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.293512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.293631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.293660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.293781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.293809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.293927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.293956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.295425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.295476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.295695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.295727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.295920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.295951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 
00:35:46.017 [2024-07-13 01:00:57.296124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.296155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.296417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.296450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.296637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.296669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.296911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.296943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.017 qpair failed and we were unable to recover it. 00:35:46.017 [2024-07-13 01:00:57.297140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.017 [2024-07-13 01:00:57.297171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.297449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.297481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.297615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.297653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.297892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.297923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.298040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.298073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.298260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.298291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 
00:35:46.018 [2024-07-13 01:00:57.298467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.298498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.298692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.298724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.298913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.298944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.299067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.299098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.299289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.299320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.299430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.299460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.299679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.299710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.299893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.299924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.300166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.300197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.300367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.300435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 
00:35:46.018 [2024-07-13 01:00:57.300704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.300738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.300943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.300974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.301163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.301195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.301475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.301508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.301634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.301664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.301780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.301811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.301988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.302019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.302235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.302267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.302405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.302436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.303863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.303914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 
00:35:46.018 [2024-07-13 01:00:57.304204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.304251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.304442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.304473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.306286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.306342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.306666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.306700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.306886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.306918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.307057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.307089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.307240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.307272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.307488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.307519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.307788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.307819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.307918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.307949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 
00:35:46.018 [2024-07-13 01:00:57.308125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.308155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.308290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.308323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.308443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.308472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.308605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.308636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.308874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.018 [2024-07-13 01:00:57.308905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.018 qpair failed and we were unable to recover it. 00:35:46.018 [2024-07-13 01:00:57.309146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.309177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.309426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.309463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.309590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.309620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.309797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.309827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.309937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.309969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 
00:35:46.019 [2024-07-13 01:00:57.310206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.310245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.310429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.310459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.310583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.310612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.310736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.310767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.310960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.310991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.311166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.311197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.311385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.311416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.311542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.311573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.311799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.311830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.311958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.311987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 
00:35:46.019 [2024-07-13 01:00:57.312246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.312279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.312523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.312554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.312726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.312755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.312962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.312993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.313240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.313271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.313401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.313433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.313541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.313571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.313706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.313737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.313860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.313889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.314028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.314059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 
00:35:46.019 [2024-07-13 01:00:57.314244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.314275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.314462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.314493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.314693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.314723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.314985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.315017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.315129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.315159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.315292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.315322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.316291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.316340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.316548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.316579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.316690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.316721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.316914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.316946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 
00:35:46.019 [2024-07-13 01:00:57.317146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.317178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.317314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.317347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.317469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.317499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.317673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.317704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.317819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.317849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.019 [2024-07-13 01:00:57.317965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.019 [2024-07-13 01:00:57.317994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.019 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.318108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.318143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.318320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.318353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.319665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.319713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.319961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.319994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 
00:35:46.020 [2024-07-13 01:00:57.320247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.320279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.320393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.320423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.320536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.320565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.320734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.320765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.320897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.320928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.321052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.321082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.321266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.321298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.321493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.321524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.321712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.321744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.321940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.321971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 
00:35:46.020 [2024-07-13 01:00:57.322221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.322262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.322536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.322566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.322763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.322794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.322922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.322952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.323287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.323322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.323508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.323539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.323749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.323780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.323895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.323926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.324106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.324137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.324380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.324412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 
00:35:46.020 [2024-07-13 01:00:57.324543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.324572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.324813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.324843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.324974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.325004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.325190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.325220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.325432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.325462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.325728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.325758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.325942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.325972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.326118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.326149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.326293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.020 [2024-07-13 01:00:57.326325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.020 qpair failed and we were unable to recover it. 00:35:46.020 [2024-07-13 01:00:57.326449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.021 [2024-07-13 01:00:57.326480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.021 qpair failed and we were unable to recover it. 
00:35:46.021 [2024-07-13 01:00:57.326743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.326774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.326963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.326994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.327189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.327220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.327343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.327372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.327489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.327518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.327756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.327787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.327971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.328007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.328213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.328254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.328462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.328492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.328632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.328664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.328849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.328880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.329003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.329032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.329161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.329189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.329384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.329416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.329542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.329571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.329759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.329790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.329921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.329951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.330148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.330180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.330384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.330415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.330550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.330582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.330773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.330805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.331012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.331043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.331250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.331282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.331411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.331442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.331602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.331632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.331740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.331770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.331885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.331922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.332083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.332125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.332330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.332376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.332529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.332573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.332794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.332843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.333051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.333099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.333253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.333300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.333508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.333549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.333768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.333811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.334077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.334112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.334301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.334334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.334442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.334473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.334607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.334639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.334915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.021 [2024-07-13 01:00:57.334947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.021 qpair failed and we were unable to recover it.
00:35:46.021 [2024-07-13 01:00:57.335220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.335266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.335539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.335570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.335692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.335722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.335903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.335935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.336067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.336096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.336289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.336323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.336459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.336498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.336680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.336711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.336930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.336960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.337089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.337119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.337359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.337391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.337594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.337625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.337748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.337779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.337961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.337991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.338127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.338158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.338355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.338386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.338567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.338594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.338860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.338890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.339013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.339042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.339172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.339201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.339408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.339439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.339610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.339640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.339771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.339800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.339977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.340007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.340123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.340154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.340333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.340364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.340625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.340655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.340764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.340792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.340964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.340994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.341171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.341201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.341338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.341368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.341555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.341585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.341711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.341741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.341938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.341969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.342235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.342268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.342458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.342488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.342594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.342623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.342821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.342851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.343144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.343175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.343496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.343526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.343661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.022 [2024-07-13 01:00:57.343691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.022 qpair failed and we were unable to recover it.
00:35:46.022 [2024-07-13 01:00:57.343820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.343848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.344046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.344076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.344285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.344319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.344458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.344487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.344592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.344621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.344826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.344861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.345129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.345160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.345341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.345372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.345567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.345596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.345774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.345803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.345942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.345972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.346207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.346250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.346358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.346387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.346573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.346602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.346735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.346765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.347006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.347035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.347246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.347277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.347490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.347520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.347646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.347674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.347868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.347897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.348082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.348112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.348240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.348273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.348444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.348473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.348681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.348709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.348955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.348983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.349108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.349137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.349286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.349319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.349492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.349520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.349801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.349831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.349958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.349987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.350262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.350293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.350539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.350570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.350774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.350803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.351024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.351054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.351184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.351212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.351346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.351377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.351495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.351525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.351783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.351813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.351927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.351955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.352223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.352268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.023 [2024-07-13 01:00:57.352401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.023 [2024-07-13 01:00:57.352430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.023 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.352561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.352591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.352784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.352813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.353094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.353124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.353362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.353393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.353534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.353570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.353754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.353783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.354053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.354083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.354216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.354267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.354475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.354505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.354744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.354774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.354959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.354987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.355114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.355144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.355275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.355305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.355478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.355506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.355611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.355640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.355771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.355800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.355976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.356005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.356248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.356279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.356425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.356455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.356646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.356676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.356917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.356948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.357140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.357170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.357286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.357316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.357493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.357523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.357640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.357671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.357793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.357821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.358008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.358037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.358159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.358189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.358339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.358370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.358558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.358588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.358718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.358747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.358922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.358952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.359132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.359161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.359269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.359298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.359402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.359430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.359611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.359640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.359910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.359941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.360123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.360153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.360265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.360296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.360472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.360501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.024 qpair failed and we were unable to recover it.
00:35:46.024 [2024-07-13 01:00:57.360772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.024 [2024-07-13 01:00:57.360803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.360937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.360965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.361079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.361109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.361238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.361267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.361446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.361485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.361663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.361693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.361817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.361846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.362053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.362083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.362259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.362289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.362465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.362494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.362625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.362655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.362834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.362863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.363064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.363094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.363199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.363236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.363370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.363400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.363531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.363559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.363688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.363717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.363834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.363863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.363988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.364017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.364186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.364216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.364437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.364468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.364590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.364619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.364806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.364834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.364940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.364970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.365091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.365120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.365260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.365290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.365434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.365464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.365642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.365671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.365792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.365821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.365938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.365968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.366150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.366179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.366305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.366335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.366545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.366575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.366785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.366815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.366931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.366960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.367063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.025 [2024-07-13 01:00:57.367091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.025 qpair failed and we were unable to recover it.
00:35:46.025 [2024-07-13 01:00:57.367268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.367299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.367439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.367468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.367645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.367674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.367778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.367808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.367912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.367946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.368192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.368222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.368350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.368382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.368570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.368599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.368793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.368829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.368938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.368967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.369142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.369170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.369278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.026 [2024-07-13 01:00:57.369308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.026 qpair failed and we were unable to recover it.
00:35:46.026 [2024-07-13 01:00:57.369494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.369524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.369650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.369680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.369857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.369886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.370067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.370097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.370218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.370267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.370459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.370489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.370624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.370653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.370825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.370854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.370985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.371015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.371121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.371150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 
00:35:46.026 [2024-07-13 01:00:57.371422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.371454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.371562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.371592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.371738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.371767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.371894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.371923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.372029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.372059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.372258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.372287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.372397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.372427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.372625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.372654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.372769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.372797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.372909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.372938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 
00:35:46.026 [2024-07-13 01:00:57.373055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.373086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.373207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.373246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.373441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.373471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.373592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.373622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.373805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.373835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.373947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.373976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.374149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.374179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.374359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.374390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.374501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.026 [2024-07-13 01:00:57.374530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.026 qpair failed and we were unable to recover it. 00:35:46.026 [2024-07-13 01:00:57.374651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.374681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.027 [2024-07-13 01:00:57.374814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.374843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.374946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.374974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.375083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.375113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.375305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.375334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.375456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.375485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.375659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.375694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.375815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.375850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.375961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.375989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.376183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.376212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.376350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.376379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.027 [2024-07-13 01:00:57.376505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.376535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.376723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.376753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.376859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.376887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.376995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.377023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.377134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.377164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.377272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.377301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.377482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.377512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.377623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.377652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.377841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.377870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.377975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.378005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.027 [2024-07-13 01:00:57.378130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.378159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.378332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.378363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.378489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.378520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.378655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.378683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.378789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.378819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.378990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.379022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.379148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.379177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.379388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.379420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.379546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.379575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.379789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.379819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.027 [2024-07-13 01:00:57.379934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.379963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.380143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.380172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.380365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.380396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.380569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.380599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.380709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.380738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.380904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.380934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.381100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.381130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.381262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.381293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.381421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.381451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 00:35:46.027 [2024-07-13 01:00:57.381579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.027 [2024-07-13 01:00:57.381609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.027 qpair failed and we were unable to recover it. 
00:35:46.028 [2024-07-13 01:00:57.381846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.381875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.382059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.382089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.382276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.382307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.382486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.382515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.382699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.382729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.382852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.382882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.383064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.383099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.383275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.383306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.383441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.383471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.383585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.383615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 
00:35:46.028 [2024-07-13 01:00:57.383752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.383782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.383953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.383982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.384233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.384264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.384441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.384470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.384658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.384689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.384864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.384894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.385004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.385033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.385162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.385192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.385386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.385417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.385605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.385634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 
00:35:46.028 [2024-07-13 01:00:57.385761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.385791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.385900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.385930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.386112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.386143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.386327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.386358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.386536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.386566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.386667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.386696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.386875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.386904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.387023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.387052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.387249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.387279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.387471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.387500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 
00:35:46.028 [2024-07-13 01:00:57.387631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.387661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.387797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.387826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.388031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.388060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.388309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.388341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.388474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.388504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.388699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.388729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.388914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.388943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.389120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.389151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.389277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.389308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.028 [2024-07-13 01:00:57.389552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.389582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 
00:35:46.028 [2024-07-13 01:00:57.389715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.028 [2024-07-13 01:00:57.389744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.028 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.389950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.389980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.390178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.390208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.390419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.390449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.390578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.390607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.390808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.390838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.391085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.391120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.391299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.391329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.391501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.391530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.391644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.391674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 
00:35:46.029 [2024-07-13 01:00:57.391796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.391827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.392077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.392106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.392240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.392271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.392389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.392418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.392592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.392622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.392791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.392821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.392953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.392982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.393158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.393188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.393339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.393369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 00:35:46.029 [2024-07-13 01:00:57.393487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.029 [2024-07-13 01:00:57.393516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.029 qpair failed and we were unable to recover it. 
00:35:46.029 [2024-07-13 01:00:57.393641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.029 [2024-07-13 01:00:57.393670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.029 qpair failed and we were unable to recover it.
[... the same three-message pattern (connect() failed, errno = 111 / sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously for every reconnect attempt, with only the timestamps changing, through the final occurrence below ...]
00:35:46.034 [2024-07-13 01:00:57.432043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.034 [2024-07-13 01:00:57.432073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.034 qpair failed and we were unable to recover it.
00:35:46.034 [2024-07-13 01:00:57.432190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-07-13 01:00:57.432219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-07-13 01:00:57.432338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-07-13 01:00:57.432368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-07-13 01:00:57.432496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-07-13 01:00:57.432524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-07-13 01:00:57.432642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-07-13 01:00:57.432677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.034 [2024-07-13 01:00:57.432783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.034 [2024-07-13 01:00:57.432812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.034 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.432957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.432987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.433102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.433131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.433275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.433306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.433486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.433516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.433686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.433715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 
00:35:46.035 [2024-07-13 01:00:57.433887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.433916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.434028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.434057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.434164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.434193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.434326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.434355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.434538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.434568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.434676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.434705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.434814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.434842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.434972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.435001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.435123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.435151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.435333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.435364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 
00:35:46.035 [2024-07-13 01:00:57.435474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.435503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.435630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.435658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.435780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.435809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.435931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.435961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.436142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.436171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.436307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.436338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.436443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.436471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.436646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.436677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.436855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.436884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.437134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.437163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 
00:35:46.035 [2024-07-13 01:00:57.437297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.437328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.437439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.437468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.437591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.437620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.437739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.437767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.437874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.437903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.438012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.438043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.438164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.438191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.438393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.438425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.438612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.438642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.438773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.438803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 
00:35:46.035 [2024-07-13 01:00:57.438984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.439014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.439136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.439166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.439303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.439330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.439435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.439465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.439574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.439600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.035 qpair failed and we were unable to recover it. 00:35:46.035 [2024-07-13 01:00:57.439729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.035 [2024-07-13 01:00:57.439757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.439927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.439954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.440051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.440077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.440181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.440207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.440389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.440416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-07-13 01:00:57.440550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.440577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.440764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.440791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.440902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.440928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.441049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.441076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.441188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.441215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.441332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.441358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.441543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.441570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.441696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.441725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.441850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.441876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.442058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.442086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-07-13 01:00:57.442262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.442290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.442482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.442509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.442680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.442706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.442811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.442837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.443044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.443071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.443182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.443209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.443390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.443416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.443529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.443556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.443683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.443709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.443920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.443950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-07-13 01:00:57.444139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.444169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.444289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.444321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.444439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.444468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.444577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.444609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.444735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.444763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.444869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.444897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.445021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.445052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.445252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.445279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.445531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.445558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.445674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.445700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 
00:35:46.036 [2024-07-13 01:00:57.445805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.445832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.446004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.446030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.446144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.446173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.446293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.446325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.446444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.446470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.446584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.446610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.036 qpair failed and we were unable to recover it. 00:35:46.036 [2024-07-13 01:00:57.446798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.036 [2024-07-13 01:00:57.446824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.447003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.447029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.447143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.447170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.447284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.447311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 
00:35:46.037 [2024-07-13 01:00:57.447426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.447456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.447556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.447581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.447689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.447715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.447878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.447906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.448085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.448111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.448281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.448309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.448489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.448519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.448643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.448672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.448780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.448808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.448933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.448964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 
00:35:46.037 [2024-07-13 01:00:57.449084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.449110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.449213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.449250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.449366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.449391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.449500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.449529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.449703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.449733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.449915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.449944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.450058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.450086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.450268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.450298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.450475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.450505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.450623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.450652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 
00:35:46.037 [2024-07-13 01:00:57.450899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.450930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.451041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.451070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.451182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.451211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.451336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.451367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.451544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.451574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.451700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.451729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.451905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.451935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.452048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.452078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.452194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.452223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.452443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.452473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 
00:35:46.037 [2024-07-13 01:00:57.452580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.452609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.452783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.037 [2024-07-13 01:00:57.452813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.037 qpair failed and we were unable to recover it. 00:35:46.037 [2024-07-13 01:00:57.452943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.452972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 00:35:46.038 [2024-07-13 01:00:57.453086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.453121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 00:35:46.038 [2024-07-13 01:00:57.453248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.453278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 00:35:46.038 [2024-07-13 01:00:57.453398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.453425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 00:35:46.038 [2024-07-13 01:00:57.453531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.453559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 00:35:46.038 [2024-07-13 01:00:57.453680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.453710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 00:35:46.038 [2024-07-13 01:00:57.453833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.453863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 00:35:46.038 [2024-07-13 01:00:57.454059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.038 [2024-07-13 01:00:57.454087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.038 qpair failed and we were unable to recover it. 
00:35:46.038 [2024-07-13 01:00:57.454194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.038 [2024-07-13 01:00:57.454222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.038 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 from posix.c:1038, sock connection error on tqpair=0x7fa968000b90 from nvme_tcp.c:2383, followed by the unrecoverable-qpair message) repeats verbatim for every reconnection attempt from 01:00:57.454 through 01:00:57.491; only the timestamps differ ...]
00:35:46.043 [2024-07-13 01:00:57.491399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.043 [2024-07-13 01:00:57.491433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.043 qpair failed and we were unable to recover it.
00:35:46.043 [2024-07-13 01:00:57.491575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.491610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.491741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.491778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.491905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.491933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.492115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.492146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.492274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.492306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.492412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.492440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.492655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.492689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.492935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.492973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.493165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.493195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.493319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.493354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 
00:35:46.043 [2024-07-13 01:00:57.493604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.493646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.493901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.493932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.494120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.494151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.494283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.494315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.494439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.494467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.494611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.494642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.494745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.494775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.494952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.494983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.495178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.495209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.495441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.495472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 
00:35:46.043 [2024-07-13 01:00:57.495647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.495679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.043 [2024-07-13 01:00:57.495797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.043 [2024-07-13 01:00:57.495826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.043 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.495961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.495993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.496216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.496258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.496450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.496480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.496655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.496683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.496806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.496836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.496959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.496987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.497182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.497211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.497347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.497375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 
00:35:46.044 [2024-07-13 01:00:57.497584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.497615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.497748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.497778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.497916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.497944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.498051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.498081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.498213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.498251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.498433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.498462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.498637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.498666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.498844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.498874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.499019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.499049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.499237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.499267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 
00:35:46.044 [2024-07-13 01:00:57.499367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.499394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.499501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.499529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.499720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.499748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.499866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.499896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.500170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.500199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.500328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.500358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.500484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.500513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.500633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.500662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.500838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.500868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.500989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.501017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 
00:35:46.044 [2024-07-13 01:00:57.501125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.501161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.501272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.501302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.501417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.501446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.501552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.501580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.501723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.501752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.501870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.501898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.501997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.502025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.502272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.502302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.502504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.502534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.502640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.502668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 
00:35:46.044 [2024-07-13 01:00:57.502857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.502886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.503012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.503041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.044 qpair failed and we were unable to recover it. 00:35:46.044 [2024-07-13 01:00:57.503165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.044 [2024-07-13 01:00:57.503195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.503380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.503409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.503609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.503638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.503817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.503847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.504050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.504080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.504205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.504242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.504429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.504457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.504697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.504725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 
00:35:46.045 [2024-07-13 01:00:57.505019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.505048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.505152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.505179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.505306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.505337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.505518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.505547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.505755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.505785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.505967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.505995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.506178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.506206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.506415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.506443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.506561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.506591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.506775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.506804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 
00:35:46.045 [2024-07-13 01:00:57.506919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.506947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.507049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.507077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.507205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.507247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.507422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.507452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.507587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.507616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.507813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.507843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.507950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.507981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.508192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.508221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.508360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.508391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.508505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.508534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 
00:35:46.045 [2024-07-13 01:00:57.508708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.508744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.508863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.508891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.509084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.509113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.509217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.509258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.509371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.509401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.509510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.509539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.509754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.509784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.509907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.509936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.510177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.510206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.510342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.510370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 
00:35:46.045 [2024-07-13 01:00:57.510480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.510510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.510711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.510740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.510845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.510876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.045 qpair failed and we were unable to recover it. 00:35:46.045 [2024-07-13 01:00:57.511065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.045 [2024-07-13 01:00:57.511094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.511235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.511266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.511365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.511393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.511574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.511603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.511782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.511811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.511947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.511978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.512150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.512178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 
00:35:46.046 [2024-07-13 01:00:57.512439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.512469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.512578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.512605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.512792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.512822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.512932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.512960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.513178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.513206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.513414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.513443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.513700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.513728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.513909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.513937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.514067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.514096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.514209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.514246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 
00:35:46.046 [2024-07-13 01:00:57.514439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.514468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.514643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.514671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.514859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.514888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.514996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.515024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.515261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.515290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.515472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.515501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.515616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.515646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.515778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.515806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.515931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.515960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 00:35:46.046 [2024-07-13 01:00:57.516142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.516171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it. 
00:35:46.046 [2024-07-13 01:00:57.516464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.046 [2024-07-13 01:00:57.516499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.046 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" records repeat for every retry from 01:00:57.516464 through 01:00:57.561832, all for tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 ...]
00:35:46.330 [2024-07-13 01:00:57.561803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.561832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it.
00:35:46.330 [2024-07-13 01:00:57.562084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.562113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.562372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.562403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.562527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.562555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.562729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.562759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.563011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.563041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.563215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.563255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.563432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.563461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.563701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.563731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.563925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.563954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 00:35:46.330 [2024-07-13 01:00:57.564144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.330 [2024-07-13 01:00:57.564172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.330 qpair failed and we were unable to recover it. 
00:35:46.330 [2024-07-13 01:00:57.564361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.564390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.564575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.564605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.564839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.564869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.565000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.565030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.565290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.565321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.565507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.565536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.565732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.565762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.565941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.565970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.566140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.566169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.566380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.566411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 
00:35:46.331 [2024-07-13 01:00:57.566667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.566696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.566821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.566850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.567037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.567065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.567186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.567215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.567440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.567471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.567594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.567623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.567859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.567894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.568175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.568207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.568350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.568380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.568630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.568659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 
00:35:46.331 [2024-07-13 01:00:57.568868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.568903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.569160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.569190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.569322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.569356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.569546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.569573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.569773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.569803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.569993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.570023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.570202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.570241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.570480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.570509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.570701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.570731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 00:35:46.331 [2024-07-13 01:00:57.570849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.570878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.331 qpair failed and we were unable to recover it. 
00:35:46.331 [2024-07-13 01:00:57.571077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.331 [2024-07-13 01:00:57.571106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.571286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.571318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.571509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.571539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.571731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.571760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.571884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.571913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.572192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.572221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.572408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.572438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.572554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.572583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.572694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.572723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.572982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.573011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 
00:35:46.332 [2024-07-13 01:00:57.573139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.573169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.573375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.573406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.573654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.573684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.573858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.573887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.574067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.574097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.574210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.574261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.574443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.574472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.574659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.574689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.574817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.574846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.574964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.574994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 
00:35:46.332 [2024-07-13 01:00:57.575182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.575211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.575399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.575428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.575626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.575656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.575783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.575812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.576050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.576079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.576319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.576349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.576544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.576573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.576745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.576774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.576902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.576931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.577120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.577149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 
00:35:46.332 [2024-07-13 01:00:57.577357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.332 [2024-07-13 01:00:57.577393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.332 qpair failed and we were unable to recover it. 00:35:46.332 [2024-07-13 01:00:57.577513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.577542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.577728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.577757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.577935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.577964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.578205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.578243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.578367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.578396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.578662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.578691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.578816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.578844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.579095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.579125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.579259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.579289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 
00:35:46.333 [2024-07-13 01:00:57.579478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.579507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.579692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.579721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.579869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.579899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.580027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.580056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.580246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.580276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.580466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.580494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.580619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.580648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.580766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.580794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.581066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.581095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.581219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.581258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 
00:35:46.333 [2024-07-13 01:00:57.581402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.581432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.581606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.581635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.581823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.581851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.582035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.582063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.582205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.582258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.582501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.582530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.582703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.582732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.582911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.582940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.583054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.583084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.333 [2024-07-13 01:00:57.583278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.583307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 
00:35:46.333 [2024-07-13 01:00:57.583552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.333 [2024-07-13 01:00:57.583581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.333 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.583686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.583715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.583971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.584000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.584102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.584130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.584251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.584282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.584453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.584483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.584748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.584778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.584967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.584997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.585180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.585208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.585334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.585363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 
00:35:46.334 [2024-07-13 01:00:57.585548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.585583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.585829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.585859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.586036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.586065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.586188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.586216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.586417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.586447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.586582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.586610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.586874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.586903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.587034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.587063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.587182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.587212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.587349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.587378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 
00:35:46.334 [2024-07-13 01:00:57.587508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.587536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.587646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.587674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.587859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.587890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.587996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.588024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.588219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.588258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.588384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.588412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.588535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.588564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.588828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.588857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.588973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.589001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 00:35:46.334 [2024-07-13 01:00:57.589121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.334 [2024-07-13 01:00:57.589150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.334 qpair failed and we were unable to recover it. 
00:35:46.334 [2024-07-13 01:00:57.589336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.334 [2024-07-13 01:00:57.589367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.334 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats back-to-back for every subsequent connection attempt, timestamps advancing from 01:00:57.589336 to 01:00:57.635188 (replay clock 00:35:46.334 through 00:35:46.342); each attempt targets tqpair=0x7fa968000b90 at 10.0.0.2, port 4420 and fails identically with errno = 111 ...]
00:35:46.342 [2024-07-13 01:00:57.635410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.635440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.635629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.635658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.635921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.635950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.636130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.636160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.636413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.636443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.636626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.636655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.636868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.636897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.637190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.637220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.637445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.637474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.637662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.637691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 
00:35:46.342 [2024-07-13 01:00:57.637816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.637845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.638117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.638147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.638416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.638446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.638583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.638612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.638744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.638773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.638918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.638948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.639123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.639152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.639346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.639375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.639576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.639606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.639748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.639778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 
00:35:46.342 [2024-07-13 01:00:57.639977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.640006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.640123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.640152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.640335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.640364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.640577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.640607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.640782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.640811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.342 [2024-07-13 01:00:57.640944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.342 [2024-07-13 01:00:57.640973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.342 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.641152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.641181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.641373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.641402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.641605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.641635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.641807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.641835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 
00:35:46.343 [2024-07-13 01:00:57.642106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.642135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.642323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.642354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.642482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.642510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.642712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.642740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.642930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.642958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.643165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.643195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.643331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.643361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.643534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.643570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.643687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.643716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.643908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.643938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 
00:35:46.343 [2024-07-13 01:00:57.644146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.644176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.644309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.644340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.644528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.644558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.644684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.644713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.644904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.644934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.645072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.645100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.645360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.645390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.645579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.645608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.645787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.645815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.646012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.646041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 
00:35:46.343 [2024-07-13 01:00:57.646215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.646256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.646481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.646510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.646625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.646653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.646837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.646866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.647004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.647032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.647205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.647245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.647371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.647399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.647507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.343 [2024-07-13 01:00:57.647536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.343 qpair failed and we were unable to recover it. 00:35:46.343 [2024-07-13 01:00:57.647731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.647760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.647959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.647988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 
00:35:46.344 [2024-07-13 01:00:57.648253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.648298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.648419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.648447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.648687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.648716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.648930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.648959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.649156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.649186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.649328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.649360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.649611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.649639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.649827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.649857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.650098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.650127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.650304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.650333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 
00:35:46.344 [2024-07-13 01:00:57.650518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.650547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.650740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.650769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.650957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.650987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.651164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.651194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.651391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.651421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.651687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.651717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.651899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.651928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.652051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.652084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.652322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.652351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.652529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.652558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 
00:35:46.344 [2024-07-13 01:00:57.652736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.652766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.652962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.652991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.653260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.653290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.653537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.653568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.653691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.653720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.653923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.653952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.654137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.344 [2024-07-13 01:00:57.654165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.344 qpair failed and we were unable to recover it. 00:35:46.344 [2024-07-13 01:00:57.654356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.654386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.654514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.654543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.654710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.654740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 
00:35:46.345 [2024-07-13 01:00:57.654850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.654879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.655080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.655110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.655311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.655341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.655604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.655634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.655803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.655832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.655950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.655980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.656099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.656128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.656370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.656400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.656571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.656600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.656795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.656823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 
00:35:46.345 [2024-07-13 01:00:57.657087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.657117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.657308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.657337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.657549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.657578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.657765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.657795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.658038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.658067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.658331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.658361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.658600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.658628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.658750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.658780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.659021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.659056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.659244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.659274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 
00:35:46.345 [2024-07-13 01:00:57.659513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.659543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.659787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.659815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.659993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.660021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.660206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.660264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.660396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.660426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.660611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.660641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.660827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.660856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.661028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.661061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.345 qpair failed and we were unable to recover it. 00:35:46.345 [2024-07-13 01:00:57.661180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.345 [2024-07-13 01:00:57.661209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.661406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.661436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 
00:35:46.346 [2024-07-13 01:00:57.661541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.661569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.661747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.661778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.661965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.661995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.662166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.662195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.662473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.662503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.662637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.662665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.662872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.662900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.663012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.663040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.663240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.663270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 00:35:46.346 [2024-07-13 01:00:57.663400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.346 [2024-07-13 01:00:57.663429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:46.346 qpair failed and we were unable to recover it. 
00:35:46.346 [2024-07-13 01:00:57.663616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.346 [2024-07-13 01:00:57.663645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.346 qpair failed and we were unable to recover it.
00:35:46.346 [... repeated identical connect() failures (errno = 111) on tqpair=0x7fa968000b90, addr=10.0.0.2, port=4420 elided ...]
00:35:46.349 [2024-07-13 01:00:57.683242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.349 [2024-07-13 01:00:57.683307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.349 qpair failed and we were unable to recover it.
00:35:46.349 [... repeated identical connect() failures (errno = 111) on tqpair=0x1321b60, addr=10.0.0.2, port=4420 elided ...]
00:35:46.353 [2024-07-13 01:00:57.706321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.353 [2024-07-13 01:00:57.706353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.353 qpair failed and we were unable to recover it.
00:35:46.353 [2024-07-13 01:00:57.706536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.706566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.706821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.706850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.707038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.707067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.707182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.707212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.707353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.707384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.707582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.707611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.707785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.707815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.708073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.708104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.708238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.708269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.708440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.708470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 
00:35:46.353 [2024-07-13 01:00:57.708644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.708674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.708860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.708889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.353 [2024-07-13 01:00:57.709029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.353 [2024-07-13 01:00:57.709059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.353 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.709202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.709240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.709471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.709501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.709684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.709713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.709917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.709952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.710193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.710240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.710374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.710404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.710667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.710697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 
00:35:46.354 [2024-07-13 01:00:57.710890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.710920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.711095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.711125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.711368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.711398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.711536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.711565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.711739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.711769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.711877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.711907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.712035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.712065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.712253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.712283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.712451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.712481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.712655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.712684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 
00:35:46.354 [2024-07-13 01:00:57.712876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.712906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.713082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.713112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.713360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.713391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.713509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.713542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.713718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.713747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.713939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.713968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.714171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.714201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.714448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.714478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.714649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.714679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.714942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.714972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 
00:35:46.354 [2024-07-13 01:00:57.715183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.715213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.715499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.715528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.354 [2024-07-13 01:00:57.715806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.354 [2024-07-13 01:00:57.715835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.354 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.716019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.716049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.716236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.716268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.716483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.716512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.716701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.716731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.716871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.716899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.717075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.717104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.717274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.717304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 
00:35:46.355 [2024-07-13 01:00:57.717485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.717514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.717653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.717683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.717947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.717976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.718167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.718197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.718421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.718452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.718585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.718614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.718873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.718903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.719080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.719109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.719239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.719270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.719372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.719401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 
00:35:46.355 [2024-07-13 01:00:57.719525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.719560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.719744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.719773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.719899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.719928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.720112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.720141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.720431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.720462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.720587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.720615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.720746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.720776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.720903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.720932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.721063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.721092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.721209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.721249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 
00:35:46.355 [2024-07-13 01:00:57.721489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.721519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.721762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.721791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.721899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.721930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.355 [2024-07-13 01:00:57.722049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.355 [2024-07-13 01:00:57.722078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.355 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.722344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.722375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.722496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.722525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.722657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.722687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.722878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.722907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.723091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.723121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.723409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.723441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 
00:35:46.356 [2024-07-13 01:00:57.723559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.723589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.723732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.723762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.723939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.723968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.724097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.724126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.724321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.724352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.724525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.724554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.724828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.724858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.725050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.725079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.725290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.725321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.725429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.725459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 
00:35:46.356 [2024-07-13 01:00:57.725696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.725725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.725962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.725992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.726205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.726253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.726443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.726471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.726666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.726696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.726875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.726904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.727097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.727127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.727241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.727272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.727531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.727561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.727747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.727777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 
00:35:46.356 [2024-07-13 01:00:57.727972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.728002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.728249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.728280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.728403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.728432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.728622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.728651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.728860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.356 [2024-07-13 01:00:57.728890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.356 qpair failed and we were unable to recover it. 00:35:46.356 [2024-07-13 01:00:57.729094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.729123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.729317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.729348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.729559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.729588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.729714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.729743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.729927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.729956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 
00:35:46.357 [2024-07-13 01:00:57.730221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.730258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.730452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.730482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.730728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.730757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.730943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.730972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.731244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.731274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.731516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.731545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.731667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.731697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.731963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.731992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.732258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.732288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.732476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.732505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 
00:35:46.357 [2024-07-13 01:00:57.732746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.732777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.732984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.733013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.733205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.733246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.733448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.733476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.733688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.733718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.733977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.734007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.734201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.734252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.734380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.734409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.734655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.734689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.734930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.734960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 
00:35:46.357 [2024-07-13 01:00:57.735223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.735265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.735447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.735476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.735655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.735685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.735921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.735951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.736172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.736201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.357 [2024-07-13 01:00:57.736471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.357 [2024-07-13 01:00:57.736500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.357 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.736622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.736651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.736886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.736915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.737096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.737125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.737258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.737290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 
00:35:46.358 [2024-07-13 01:00:57.737413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.737444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.737685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.737718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.737894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.737923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.738034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.738063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.738271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.738310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.738513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.738542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.738745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.738775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.738960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.738989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.739243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.739275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.739393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.739423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 
00:35:46.358 [2024-07-13 01:00:57.739571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.739600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.739774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.739804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.740092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.740123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.740322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.740352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.740483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.740512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.740634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.740671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.740926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.740956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.741087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.741117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.741324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.358 [2024-07-13 01:00:57.741354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.358 qpair failed and we were unable to recover it. 00:35:46.358 [2024-07-13 01:00:57.741598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.741628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 
00:35:46.359 [2024-07-13 01:00:57.741805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.741835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.741953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.741983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.742100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.742129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.742301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.742332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.742518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.742549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.742756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.742785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.742907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.742937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.743120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.743151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.743290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.743321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.743506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.743536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 
00:35:46.359 [2024-07-13 01:00:57.743707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.743736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.743925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.743955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.744138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.744168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.744351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.744381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.744587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.744617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.744752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.744782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.745029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.745059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.745272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.745303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.745505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.745536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.745714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.745743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 
00:35:46.359 [2024-07-13 01:00:57.745931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.745961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.746085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.746115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.746254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.746290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.746469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.746499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.746762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.746793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.746979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.747008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.747121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.747150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.747333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.747364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.747488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.359 [2024-07-13 01:00:57.747518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.359 qpair failed and we were unable to recover it. 00:35:46.359 [2024-07-13 01:00:57.747710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.747740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 
00:35:46.360 [2024-07-13 01:00:57.747909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.747939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.748044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.748073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.748264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.748294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.748576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.748605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.748807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.748838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.749037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.749068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.749280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.749311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.749517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.749547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.749739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.749769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.749949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.749979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 
00:35:46.360 [2024-07-13 01:00:57.750159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.750190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.750392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.750422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.750624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.750654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.750771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.750801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.750941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.750971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.751090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.751120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.751248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.751278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.751451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.751481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.751600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.751630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.751822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.751853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 
00:35:46.360 [2024-07-13 01:00:57.752065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.752096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.752245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.752289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.752557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.752588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.752758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.752787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.753025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.753055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.753245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.753282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.753401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.753431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.753614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.753644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.753833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.753862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.754145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.754174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 
00:35:46.360 [2024-07-13 01:00:57.754332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.360 [2024-07-13 01:00:57.754364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.360 qpair failed and we were unable to recover it. 00:35:46.360 [2024-07-13 01:00:57.754554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.754584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.754707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.754737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.754848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.754883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.755054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.755085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.755275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.755306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.755507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.755537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.755659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.755688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.755867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.755897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.756091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.756122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 
00:35:46.361 [2024-07-13 01:00:57.756258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.756289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.756406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.756440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.756575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.756606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.756731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.756760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.756874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.756904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.757143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.757173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.757446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.757478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.757743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.757772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.757992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.758021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.758159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.758189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 
00:35:46.361 [2024-07-13 01:00:57.758341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.758372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.758611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.758642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.758757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.758787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.758958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.758988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.759102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.759133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.759369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.759399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.759605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.759635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.759757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.759786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.759987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.760017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.760123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.760152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 
00:35:46.361 [2024-07-13 01:00:57.760401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.760440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.760634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.361 [2024-07-13 01:00:57.760664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.361 qpair failed and we were unable to recover it. 00:35:46.361 [2024-07-13 01:00:57.760870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.760900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.761089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.761118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.761317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.761347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.761456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.761485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.761673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.761703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.761901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.761930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.762173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.762202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.762441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.762472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 
00:35:46.362 [2024-07-13 01:00:57.762693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.762723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.762963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.762993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.763101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.763131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.763372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.763403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.763547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.763578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.763749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.763778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.763952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.763982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.764096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.764126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.764315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.764346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.764516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.764544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 
00:35:46.362 [2024-07-13 01:00:57.764807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.764836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.765106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.765136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.765262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.765293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.765408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.765437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.765645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.765674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.765773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.765802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.766054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.766085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.766353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.766389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.766510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.766539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.766730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.766759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 
00:35:46.362 [2024-07-13 01:00:57.766896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.766926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.767166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.767195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.362 [2024-07-13 01:00:57.767381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.362 [2024-07-13 01:00:57.767412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.362 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.767601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.767630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.767816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.767845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.768124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.768154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.768270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.768300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.768564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.768593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.768772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.768802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.768916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.768944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 
00:35:46.363 [2024-07-13 01:00:57.769127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.769157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.769281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.769311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.769590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.769620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.769807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.769837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.770024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.770053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.770317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.770347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.770622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.770653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.770823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.770852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.771051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.771081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.771289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.771319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 
00:35:46.363 [2024-07-13 01:00:57.771458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.771487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.771745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.771775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.772012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.772041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.772246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.772277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.772388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.772418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.772643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.772673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.772883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.772913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.773195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.773233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.773377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.773408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 00:35:46.363 [2024-07-13 01:00:57.773597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.363 [2024-07-13 01:00:57.773626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.363 qpair failed and we were unable to recover it. 
00:35:46.367 [2024-07-13 01:00:57.795888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.367 [2024-07-13 01:00:57.795956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:46.367 qpair failed and we were unable to recover it.
00:35:46.368 [2024-07-13 01:00:57.804257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.368 [2024-07-13 01:00:57.804290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.368 qpair failed and we were unable to recover it.
00:35:46.370 [2024-07-13 01:00:57.816891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.816921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.370 qpair failed and we were unable to recover it. 00:35:46.370 [2024-07-13 01:00:57.817110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.817140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.370 qpair failed and we were unable to recover it. 00:35:46.370 [2024-07-13 01:00:57.817382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.817413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.370 qpair failed and we were unable to recover it. 00:35:46.370 [2024-07-13 01:00:57.817648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.817677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.370 qpair failed and we were unable to recover it. 00:35:46.370 [2024-07-13 01:00:57.817855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.817885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.370 qpair failed and we were unable to recover it. 00:35:46.370 [2024-07-13 01:00:57.818079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.818108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.370 qpair failed and we were unable to recover it. 00:35:46.370 [2024-07-13 01:00:57.818237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.818268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.370 qpair failed and we were unable to recover it. 00:35:46.370 [2024-07-13 01:00:57.818456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.370 [2024-07-13 01:00:57.818486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.818598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.818628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.818801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.818830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 
00:35:46.371 [2024-07-13 01:00:57.819005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.819035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.819248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.819278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.819480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.819509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.819698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.819728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.819901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.819932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.820099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.820128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.820256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.820288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.820500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.820530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.820708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.820738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.820911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.820940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 
00:35:46.371 [2024-07-13 01:00:57.821062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.821092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.821299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.821330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.821519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.821548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.821718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.821748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.821869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.821904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.822044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.822072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.822184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.822215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.822345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.822375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.822584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.822614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.822805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.822835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 
00:35:46.371 [2024-07-13 01:00:57.823069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.823099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.823269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.823299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.823472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.823501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.823680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.823710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.823905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.823935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.824104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.824133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.824313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.824344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.824516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.824545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.371 qpair failed and we were unable to recover it. 00:35:46.371 [2024-07-13 01:00:57.824819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.371 [2024-07-13 01:00:57.824850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.824973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.825003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 
00:35:46.372 [2024-07-13 01:00:57.825157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.825186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.825319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.825349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.825544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.825574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.825691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.825720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.825954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.825984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.826151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.826180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.826371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.826402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.826581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.826610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.826785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.826814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.826983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.827012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 
00:35:46.372 [2024-07-13 01:00:57.827148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.827179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.827458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.827489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.827603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.827632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.827846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.827876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.828089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.828119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.828301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.828332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.828526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.828556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.828668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.828698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.828820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.828850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.828969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.828999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 
00:35:46.372 [2024-07-13 01:00:57.829175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.829204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.829395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.829426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.829721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.829751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.830020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.830049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.830290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.830320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.830590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.830620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.830883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.830913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.831104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.831134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.831310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.831341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 00:35:46.372 [2024-07-13 01:00:57.831535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.372 [2024-07-13 01:00:57.831564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.372 qpair failed and we were unable to recover it. 
00:35:46.372 [2024-07-13 01:00:57.831742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.831772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.832009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.832040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.832245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.832276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.832533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.832562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.832759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.832790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.832976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.833005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.833190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.833220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.833344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.833374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.833640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.833670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.833849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.833879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 
00:35:46.373 [2024-07-13 01:00:57.834021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.834050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.834312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.834343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.834527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.834558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.834730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.834759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.835017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.835047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.835352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.835382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.835584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.835614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.835818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.835847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.835988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.836017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.836237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.836267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 
00:35:46.373 [2024-07-13 01:00:57.836379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.836410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.836591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.836621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.836739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.836774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.836977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.837007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.837126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.837156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.837412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.837442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.837656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.837686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.837922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.837951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.838127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.838157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.838343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.838374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 
00:35:46.373 [2024-07-13 01:00:57.838660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.838689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.838950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.838980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.839127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.839156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.373 [2024-07-13 01:00:57.839327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.373 [2024-07-13 01:00:57.839357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.373 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.839456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.839486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.839609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.839638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.839885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.839915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.840154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.840183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.840475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.840505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.840689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.840719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 
00:35:46.374 [2024-07-13 01:00:57.840842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.840873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.841149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.841178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.841358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.841388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.841648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.841677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.841955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.841984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.842170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.842199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.842399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.842429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.842600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.842630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.842808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.842838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.843033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.843067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 
00:35:46.374 [2024-07-13 01:00:57.843245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.843276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.843461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.843491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.843760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.843789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.843985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.844015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.844235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.844266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.844462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.844491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.844691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.844721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.844850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.844879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.845063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.845093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 00:35:46.374 [2024-07-13 01:00:57.845275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.374 [2024-07-13 01:00:57.845305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.374 qpair failed and we were unable to recover it. 
00:35:46.374 [2024-07-13 01:00:57.845572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.374 [2024-07-13 01:00:57.845602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.374 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated for each retry against tqpair=0x1321b60 ...]
00:35:46.659 [2024-07-13 01:00:57.881159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.659 [2024-07-13 01:00:57.881189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.659 qpair failed and we were unable to recover it.
00:35:46.659 [2024-07-13 01:00:57.881440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.659 [2024-07-13 01:00:57.881502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:46.659 qpair failed and we were unable to recover it.
[... the same triplet repeated for each retry against tqpair=0x7fa958000b90 ...]
00:35:46.660 [2024-07-13 01:00:57.890023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.660 [2024-07-13 01:00:57.890049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:46.660 qpair failed and we were unable to recover it.
00:35:46.660 [2024-07-13 01:00:57.890218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.890252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.890489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.890514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.890709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.890735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.890842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.890867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.890974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.891000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.891161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.891186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.891303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.891328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.891445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.891470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.891670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.891694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.891814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.891840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 
00:35:46.660 [2024-07-13 01:00:57.892011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.892036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.892267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.892293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.892406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.892431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.892602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.892627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.892831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.892857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.893034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.893058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.893175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.893200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.893385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.893411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.893525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.893549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.893684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.893708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 
00:35:46.660 [2024-07-13 01:00:57.893912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.893936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.894057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.894082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.894202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.660 [2024-07-13 01:00:57.894238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.660 qpair failed and we were unable to recover it. 00:35:46.660 [2024-07-13 01:00:57.894359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.894384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.894632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.894658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.894820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.894846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.895035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.895061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.895168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.895194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.895326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.895357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.895595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.895625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-07-13 01:00:57.895730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.895760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.895891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.895921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.896098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.896128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.896377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.896409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.896530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.896558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.896669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.896698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.896879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.896909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.897095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.897125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.897301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.897331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.897534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.897563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-07-13 01:00:57.897745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.897775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.897898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.897927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.898028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.898058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.898170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.898200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.898453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.898484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.898751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.898780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.898900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.898931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.899057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.899087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.899283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.899313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.899483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.899552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-07-13 01:00:57.899847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.899881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.900009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.900040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.900181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.900214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.900345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.900376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.900622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.900652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.900843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.900873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.900990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.901021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.901139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.901168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.901353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.901384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.901491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.901520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 
00:35:46.661 [2024-07-13 01:00:57.901698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.901728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.901849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.901878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.902058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.902087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.902266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.661 [2024-07-13 01:00:57.902297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.661 qpair failed and we were unable to recover it. 00:35:46.661 [2024-07-13 01:00:57.902478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.902509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.902684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.902713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.902891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.902920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.903093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.903122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.903319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.903351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.903554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.903584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 
00:35:46.662 [2024-07-13 01:00:57.903704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.903733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.903874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.903903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.904080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.904109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.904297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.904328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.904455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.904484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.904678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.904708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.904842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.904878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.905065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.905094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.905237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.905267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.905457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.905486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 
00:35:46.662 [2024-07-13 01:00:57.905679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.905708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.905879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.905908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.906116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.906145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.906387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.906418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.906562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.906591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.906704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.906734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.906997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.907027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.907214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.907254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.907502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.907532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.907782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.907818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 
00:35:46.662 [2024-07-13 01:00:57.908006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.908035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.908157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.908186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.908375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.908406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.908528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.908558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.908675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.908709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.908841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.908870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.909063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.909093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.909208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.662 [2024-07-13 01:00:57.909247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.662 qpair failed and we were unable to recover it. 00:35:46.662 [2024-07-13 01:00:57.909375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.909405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.909517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.909545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 
00:35:46.663 [2024-07-13 01:00:57.909723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.909755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.909863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.909892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.910015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.910043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.910283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.910320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.910520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.910556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.910757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.910786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.910908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.910937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.911117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.911146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.911273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.911306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.911427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.911455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 
00:35:46.663 [2024-07-13 01:00:57.911651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.911680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.911964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.911993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.912180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.912209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.912334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.912370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.912614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.912643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.912824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.912853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.912972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.913000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.913111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.913140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.913311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.913342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.913467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.913496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 
00:35:46.663 [2024-07-13 01:00:57.913606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.913634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.913825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.913854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.914060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.914089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.914333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.914364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.914468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.914497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.914702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.914731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.914859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.914887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.915093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.915123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.915308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.915338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 00:35:46.663 [2024-07-13 01:00:57.915528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.663 [2024-07-13 01:00:57.915560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.663 qpair failed and we were unable to recover it. 
00:35:46.663 [2024-07-13 01:00:57.915804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.663 [2024-07-13 01:00:57.915840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.663 qpair failed and we were unable to recover it.
00:35:46.664 [2024-07-13 01:00:57.918470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.664 [2024-07-13 01:00:57.918538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:46.664 qpair failed and we were unable to recover it.
00:35:46.664 [2024-07-13 01:00:57.918667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.664 [2024-07-13 01:00:57.918701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:46.664 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair values 0x1321b60, 0x7fa958000b90, and 0x7fa960000b90, all against addr=10.0.0.2, port=4420, up to the final occurrence below ...]
00:35:46.669 [2024-07-13 01:00:57.958512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.669 [2024-07-13 01:00:57.958541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.669 qpair failed and we were unable to recover it.
00:35:46.669 [2024-07-13 01:00:57.958733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.958763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.958945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.958980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.959154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.959185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.959392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.959423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.959592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.959621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.959818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.959852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.960031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.960067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.960202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.960246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.960535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.960564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.960748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.960781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 
00:35:46.669 [2024-07-13 01:00:57.960963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.960993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.961200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.961262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.961394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.961423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.961547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.961576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.961697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.961729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.961851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.961891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.962072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.962102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.962276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.962306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.962426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.962456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.962735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.962768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 
00:35:46.669 [2024-07-13 01:00:57.962890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.962922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.963075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.963104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.963214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.963253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.963422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.963452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.963581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.963613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.963742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.963770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.963959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.963991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.964096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.964125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.964297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.964328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.964447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.964476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 
00:35:46.669 [2024-07-13 01:00:57.964654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.964683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.964805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.964835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.964972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.965000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.965269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.965310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.965486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.965515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.965682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.965711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.965890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.965920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.966039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.966069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.966250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.966281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 00:35:46.669 [2024-07-13 01:00:57.966403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.669 [2024-07-13 01:00:57.966433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.669 qpair failed and we were unable to recover it. 
00:35:46.670 [2024-07-13 01:00:57.966606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.966636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.966753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.966782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.967089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.967118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.967251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.967281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.967453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.967483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.967673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.967702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.967840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.967869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.967991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.968020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.968220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.968272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.968455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.968484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 
00:35:46.670 [2024-07-13 01:00:57.968723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.968752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.968881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.968910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.969106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.969135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.969323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.969353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.969538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.969568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.969702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.969732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.969860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.969890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.970007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.970035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.970167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.970196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.970381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.970411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 
00:35:46.670 [2024-07-13 01:00:57.970602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.970636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.970804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.970833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.971004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.971033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.971158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.971187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.971370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.971400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.971580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.971609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.971734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.971763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.971877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.971907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.972097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.972127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.972268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.972298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 
00:35:46.670 [2024-07-13 01:00:57.972539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.972568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.972696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.972726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.972864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.972893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.973069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.973099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.973285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.973316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.670 [2024-07-13 01:00:57.973578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.670 [2024-07-13 01:00:57.973607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.670 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.973823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.973853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.974096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.974124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.974299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.974330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.974465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.974494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 
00:35:46.671 [2024-07-13 01:00:57.974684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.974713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.974823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.974852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.975026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.975054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.975176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.975205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.975475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.975505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.975617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.975647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.975771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.975800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.975935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.975965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.976175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.976205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.976412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.976441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 
00:35:46.671 [2024-07-13 01:00:57.976627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.976657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.976895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.976924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.977097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.977126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.977306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.977338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.977625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.977654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.977845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.977874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.978057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.978087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.978272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.978303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.978430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.978459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.978633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.978662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 
00:35:46.671 [2024-07-13 01:00:57.978775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.978804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.979015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.979045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.979240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.979271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.979442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.979472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.979713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.979742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.979948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.979977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.980178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.980207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.980335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.980364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.980604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.980634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.980736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.980765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 
00:35:46.671 [2024-07-13 01:00:57.981028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.981058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.981177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.981207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.981339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.981368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.981537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.981567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.981769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.981798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.982040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.982068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.671 [2024-07-13 01:00:57.982248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.671 [2024-07-13 01:00:57.982276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.671 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.982541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.982569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.982808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.982836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.983097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.983125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-07-13 01:00:57.983223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.983259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.983483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.983511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.983650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.983677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.983918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.983946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.984053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.984080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.984190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.984218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.984462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.984490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.984752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.984780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.984903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.984937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.985166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.985194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-07-13 01:00:57.985405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.985435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.985561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.985589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.985761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.985789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.985899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.985927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.986056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.986084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.986184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.986212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.986401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.986430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.986701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.986729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.986987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.987016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.987132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.987160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-07-13 01:00:57.987286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.987315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.987448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.987477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.987680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.987709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.987882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.987910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.988032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.988060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.988177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.988205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.988341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.988371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.988507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.988538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.988655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.988685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.988786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.988815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 
00:35:46.672 [2024-07-13 01:00:57.988998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.989028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.989216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.989255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.989443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.989474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.989651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.989681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.989919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.989949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.990124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.990159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.990346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.990376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.990503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.672 [2024-07-13 01:00:57.990533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.672 qpair failed and we were unable to recover it. 00:35:46.672 [2024-07-13 01:00:57.990656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.990685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.990875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.990904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 
00:35:46.673 [2024-07-13 01:00:57.991203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.991246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.991464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.991493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.991617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.991646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.991822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.991852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.992062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.992091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.992213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.992266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.992386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.992416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.992636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.992665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.992874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.992903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.993113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.993143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 
00:35:46.673 [2024-07-13 01:00:57.993281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.993312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.993488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.993517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.993696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.993725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.993904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.993933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.994121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.994150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.994491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.994521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.994781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.994810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.994985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.995014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.995192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.995221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.995360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.995389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 
00:35:46.673 [2024-07-13 01:00:57.995594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.995623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.995741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.995770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.995953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.995987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.996196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.996233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.996472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.996501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.996621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.996651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.996893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.996922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.997115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.997144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.997337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.997368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.997540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.997568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 
00:35:46.673 [2024-07-13 01:00:57.997750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.997779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.997948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.997978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.998120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.998149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.998324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.998361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.998467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.998496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.998674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.998704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.998816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.998845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.998962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.998991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.999158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.673 [2024-07-13 01:00:57.999188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.673 qpair failed and we were unable to recover it. 00:35:46.673 [2024-07-13 01:00:57.999389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:57.999419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 
00:35:46.674 [2024-07-13 01:00:57.999681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:57.999710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:57.999833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:57.999862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.000003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.000033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.000212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.000248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.000347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.000376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.000501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.000531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.000719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.000748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.000847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.000876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.000990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.001020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.001206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.001262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 
00:35:46.674 [2024-07-13 01:00:58.001507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.001537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.001647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.001676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.001938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.001966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.002073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.002102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.002217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.002256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.002499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.002528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.002738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.002768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.002878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.002907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.003113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.003143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.003327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.003358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 
00:35:46.674 [2024-07-13 01:00:58.003469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.003498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.003620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.003650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.003770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.003800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.004042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.004077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.004261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.004291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.004472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.004501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.004764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.004793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.005054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.005083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.005223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.005259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.005499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.005528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 
00:35:46.674 [2024-07-13 01:00:58.005729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.005759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.005957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.005986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.006197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.006232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.006428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.006458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.006587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.006616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.006853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.006882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.674 qpair failed and we were unable to recover it. 00:35:46.674 [2024-07-13 01:00:58.006988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.674 [2024-07-13 01:00:58.007017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.007221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.007260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.007385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.007415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.007674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.007702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 
00:35:46.675 [2024-07-13 01:00:58.007873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.007902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.008080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.008110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.008294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.008324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.008449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.008478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.008663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.008692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.008889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.008918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.009098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.009128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.009304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.009334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.009540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.009569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.009746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.009775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 
00:35:46.675 [2024-07-13 01:00:58.010015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.010050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.010172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.010201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.010392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.010422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.010539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.010569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.010678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.010707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.011000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.011029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.011150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.011179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.011452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.011482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.011603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.011632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.011895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.011924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 
00:35:46.675 [2024-07-13 01:00:58.012111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.012140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.012354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.012384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.012517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.012546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.012679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.012709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.012903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.012933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.013112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.013141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.013254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.013284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.013474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.013504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.013695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.013724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.013844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.013873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 
00:35:46.675 [2024-07-13 01:00:58.014056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.014085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.014321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.014351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.014545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.014574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.014685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.014715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.014888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.014917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.015036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.015065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.675 qpair failed and we were unable to recover it. 00:35:46.675 [2024-07-13 01:00:58.015243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.675 [2024-07-13 01:00:58.015273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.015457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.015491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.015677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.015706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.015880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.015909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 
00:35:46.676 [2024-07-13 01:00:58.016122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.016151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.016390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.016420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.016552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.016582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.016764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.016794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.016960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.016989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.017256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.017287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.017550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.017579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.017788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.017817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.018054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.018083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.018267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.018296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 
00:35:46.676 [2024-07-13 01:00:58.018494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.018523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.018660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.018689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.018927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.018957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.019167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.019197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.019440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.019470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.019656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.019686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.019950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.019979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.020083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.020113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.020287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.020317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.020525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.020555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 
00:35:46.676 [2024-07-13 01:00:58.020820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.020849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.020991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.021021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.021209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.021266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.021387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.021416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.021602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.021631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.021747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.021777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.021982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.022011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.022198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.022236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.022356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.022386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 00:35:46.676 [2024-07-13 01:00:58.022557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.676 [2024-07-13 01:00:58.022586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.676 qpair failed and we were unable to recover it. 
00:35:46.682 [2024-07-13 01:00:58.066573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.066602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.066793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.066822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.066994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.067023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.067194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.067234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.067517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.067547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.067805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.067834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.068030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.068059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.068264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.068295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.068426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.068456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.068716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.068746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 
00:35:46.682 [2024-07-13 01:00:58.068855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.068884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.069066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.069095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.069340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.682 [2024-07-13 01:00:58.069370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.682 qpair failed and we were unable to recover it. 00:35:46.682 [2024-07-13 01:00:58.069486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.069515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.069633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.069662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.069846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.069875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.069990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.070019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.070261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.070291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.070407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.070436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.070606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.070635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 
00:35:46.683 [2024-07-13 01:00:58.070903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.070932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.071072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.071101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.071357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.071387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.071632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.071662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.071854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.071883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.072062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.072092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.072303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.072333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.072594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.072623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.072831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.072861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.073093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.073122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 
00:35:46.683 [2024-07-13 01:00:58.073269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.073300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.073561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.073591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.073703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.073733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.073853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.073882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.074139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.074168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.074426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.074457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.074641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.074670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.074844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.074874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.075003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.075032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.075277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.075308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 
00:35:46.683 [2024-07-13 01:00:58.075501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.075530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.075720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.075749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.075932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.075961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.076093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.076122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.076382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.076413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.076601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.076630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.076740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.076769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.076966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.076995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.077258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.077288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.077406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.077435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 
00:35:46.683 [2024-07-13 01:00:58.077551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.077580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.077831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.077861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.078031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.078066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.078333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.078364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.683 qpair failed and we were unable to recover it. 00:35:46.683 [2024-07-13 01:00:58.078553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.683 [2024-07-13 01:00:58.078583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.078700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.078729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.078990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.079019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.079201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.079237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.079410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.079440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.079627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.079657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 
00:35:46.684 [2024-07-13 01:00:58.079897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.079926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.080114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.080143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.080324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.080355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.080524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.080553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.080740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.080770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.080883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.080912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.081182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.081211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.081407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.081437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.081623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.081653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.081758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.081787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 
00:35:46.684 [2024-07-13 01:00:58.081976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.082005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.082269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.082299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.082420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.082450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.082643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.082673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.082810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.082840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.082978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.083007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.083189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.083218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.083441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.083471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.083677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.083707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.083837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.083872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 
00:35:46.684 [2024-07-13 01:00:58.084050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.084079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.084266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.084297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.084467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.084496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.084684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.084714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.084848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.084878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.085006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.085036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.085233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.085263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.085431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.085460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.085708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.085738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 00:35:46.684 [2024-07-13 01:00:58.085870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.684 [2024-07-13 01:00:58.085900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.684 qpair failed and we were unable to recover it. 
00:35:46.684 [2024-07-13 01:00:58.086011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.086040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.086248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.086277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.086569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.086598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.086716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.086746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.086956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.086985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.087178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.087207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.087389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.087419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.087674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.087704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.087810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.087840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.088018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.088048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 
00:35:46.685 [2024-07-13 01:00:58.088311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.088342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.088533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.088562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.088746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.088775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.088881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.088910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.089035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.089064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.089247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.089277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.089381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.089410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.089593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.089622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.089812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.089841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.090107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.090137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 
00:35:46.685 [2024-07-13 01:00:58.090275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.090305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.090492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.090521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.090721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.090750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.090949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.090978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.091149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.091178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.091372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.091401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.091663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.091691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.091930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.091959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.092080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.092109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.092297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.092328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 
00:35:46.685 [2024-07-13 01:00:58.092587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.092617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.092752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.092781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.092989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.093019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.093133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.093162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.093373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.093403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.093593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.093622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.093808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.093837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.093962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.093992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.094242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.094272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 00:35:46.685 [2024-07-13 01:00:58.094509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.685 [2024-07-13 01:00:58.094538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:46.685 qpair failed and we were unable to recover it. 
00:35:46.685 [2024-07-13 01:00:58.094679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.685 [2024-07-13 01:00:58.094708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.685 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed triplet for tqpair=0x1321b60 repeats continuously, identical apart from timestamps ...]
00:35:46.688 [2024-07-13 01:00:58.118490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.688 [2024-07-13 01:00:58.118520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:46.688 qpair failed and we were unable to recover it.
00:35:46.688 [2024-07-13 01:00:58.118776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.688 [2024-07-13 01:00:58.118843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:46.688 qpair failed and we were unable to recover it.
[... the same triplet, now for tqpair=0x7fa960000b90, repeats continuously, identical apart from timestamps ...]
00:35:46.691 [2024-07-13 01:00:58.139195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.691 [2024-07-13 01:00:58.139233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:46.691 qpair failed and we were unable to recover it.
00:35:46.691 [2024-07-13 01:00:58.139415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.139445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.139709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.139739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.139853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.139883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.140073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.140102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.140283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.140314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.140433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.140462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.140633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.140662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.140860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.140890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.141060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.141090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.141202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.141240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 
00:35:46.691 [2024-07-13 01:00:58.141427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.141456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.141581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.141610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.141804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.141834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.142007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.142036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.142275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.142305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.142499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.142529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.142720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.142750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.143016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.143045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.143338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.143368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.143603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.143633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 
00:35:46.691 [2024-07-13 01:00:58.143844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.143873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.144061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.144091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.144364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.144399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.144662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.144692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.144806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.144834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.145086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.145115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.145239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.145270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.145413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.145443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.145711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.145741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.145912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.145941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 
00:35:46.691 [2024-07-13 01:00:58.146115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.146144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.146318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.691 [2024-07-13 01:00:58.146348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.691 qpair failed and we were unable to recover it. 00:35:46.691 [2024-07-13 01:00:58.146530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.146560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.146746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.146775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.146906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.146935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.147124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.147153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.147419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.147449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.147640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.147669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.147859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.147887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.148124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.148154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 
00:35:46.692 [2024-07-13 01:00:58.148349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.148381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.148565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.148595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.148784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.148814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.149054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.149083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.149280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.149310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.149497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.149526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.149651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.149680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.149943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.149972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.150212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.150249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.150441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.150471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 
00:35:46.692 [2024-07-13 01:00:58.150755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.150784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.150901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.150931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.151049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.151078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.151365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.151395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.151535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.151564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.151698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.151727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.151962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.151992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.152131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.152161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.152331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.152362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.152559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.152588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 
00:35:46.692 [2024-07-13 01:00:58.152773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.152802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.152993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.153022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.153263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.153299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.153473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.153503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.153617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.153646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.153909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.153938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.154056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.154085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.154219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.692 [2024-07-13 01:00:58.154256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.692 qpair failed and we were unable to recover it. 00:35:46.692 [2024-07-13 01:00:58.154526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.154556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.154762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.154791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-07-13 01:00:58.154980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.155009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.155105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.155134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.155330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.155360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.155527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.155556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.155742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.155771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.156009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.156038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.156279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.156311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.156577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.156607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.156787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.156817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.156950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.156979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-07-13 01:00:58.157243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.157273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.157405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.157434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.157627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.157656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.157940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.157969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.158151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.158181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.158370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.158401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.158584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.158614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.158869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.158898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.159011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.159040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.159245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.159275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-07-13 01:00:58.159465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.159494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.159751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.159781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.160009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.160039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.160243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.160275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.160541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.160570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.160693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.160722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.160990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.161020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.161282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.161312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.161486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.161515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.161699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.161728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 
00:35:46.693 [2024-07-13 01:00:58.161969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.161998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.162262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.162292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.162531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.162566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.162835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.162864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.163129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.163159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.163364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.163394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.163591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.163621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.163802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.163831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.693 qpair failed and we were unable to recover it. 00:35:46.693 [2024-07-13 01:00:58.163958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.693 [2024-07-13 01:00:58.163987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.164169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.164198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 
00:35:46.694 [2024-07-13 01:00:58.164415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.164446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.164686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.164716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.164953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.164982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.165217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.165257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.165447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.165476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.165713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.165741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.165989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.166018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.166303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.166334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.166503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.166532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.166791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.166819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 
00:35:46.694 [2024-07-13 01:00:58.166998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.167028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.167266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.167295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.167530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.167559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.167741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.167771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.167871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.167900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.168039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.168068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.168324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.168367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.168627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.168656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.168940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.168969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 00:35:46.694 [2024-07-13 01:00:58.169166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.169195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it. 
00:35:46.694 [2024-07-13 01:00:58.169378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.694 [2024-07-13 01:00:58.169408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.694 qpair failed and we were unable to recover it.
[... the same three-line error repeats, with only the timestamps advancing, roughly 200 more times between 01:00:58.169668 and 01:00:58.213719, always for tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 ...]
00:35:46.979 [2024-07-13 01:00:58.213907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.213936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.214053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.214082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.214267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.214302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.214479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.214509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.214710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.214739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.214975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.215003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.215136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.215166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.215349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.215379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.215563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.215592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.215774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.215803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 
00:35:46.979 [2024-07-13 01:00:58.215936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.215965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.216180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.216209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.216323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.216353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.216487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.216516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.216708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.216737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.217020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.217048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.217166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.217196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.217332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.979 [2024-07-13 01:00:58.217362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.979 qpair failed and we were unable to recover it. 00:35:46.979 [2024-07-13 01:00:58.217648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.217677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.217914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.217943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 
00:35:46.980 [2024-07-13 01:00:58.218207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.218247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.218388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.218418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.218630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.218660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.218917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.218946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.219234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.219265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.219477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.219507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.219760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.219790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.219912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.219941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.220121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.220151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.220345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.220376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 
00:35:46.980 [2024-07-13 01:00:58.220477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.220505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.220676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.220705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.220894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.220924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.221051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.221079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.221200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.221237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.221415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.221444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.221700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.221729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.221867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.221897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.222077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.222107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.222292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.222322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 
00:35:46.980 [2024-07-13 01:00:58.222495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.222524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.222641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.222670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.222869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.222903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.223003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.223032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.223293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.223323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.223506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.223535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.223725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.223754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.223948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.223977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.224159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.224188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.224473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.224504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 
00:35:46.980 [2024-07-13 01:00:58.224755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.224784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.224887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.224917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.225160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.225189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.225405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.225435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.225573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.225602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.225838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.225867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.226052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.226082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.226334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.980 [2024-07-13 01:00:58.226366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.980 qpair failed and we were unable to recover it. 00:35:46.980 [2024-07-13 01:00:58.226474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.226503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.226743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.226772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 
00:35:46.981 [2024-07-13 01:00:58.226897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.226926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.227129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.227159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.227332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.227362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.227597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.227626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.227814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.227844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.228013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.228042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.228236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.228266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.228453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.228483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.228669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.228699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.228892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.228921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 
00:35:46.981 [2024-07-13 01:00:58.229042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.229070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.229189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.229219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.229349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.229378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.229568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.229597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.229703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.229732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.229835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.229864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.230052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.230082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.230322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.230352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.230589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.230618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.230750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.230779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 
00:35:46.981 [2024-07-13 01:00:58.230958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.230987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.231109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.231139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.231342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.231377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.231616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.231645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.231760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.231790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.231912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.231942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.232126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.232155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.232406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.232436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.232540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.232570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.232745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.232775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 
00:35:46.981 [2024-07-13 01:00:58.232947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.232976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.981 qpair failed and we were unable to recover it. 00:35:46.981 [2024-07-13 01:00:58.233158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.981 [2024-07-13 01:00:58.233187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.233405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.233435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.233697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.233726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.233970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.233999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.234253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.234283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.234473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.234503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.234622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.234651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.234836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.234865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.235055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.235084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 
00:35:46.982 [2024-07-13 01:00:58.235291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.235321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.235501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.235530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.235662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.235691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.235979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.236007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.236122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.236150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.236363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.236392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.236632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.236661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.236911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.236941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.237073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.237103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.237319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.237350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 
00:35:46.982 [2024-07-13 01:00:58.237485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.237514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.237639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.237671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.237800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.237830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.238013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.238043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.238176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.238206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.238334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.238364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.238564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.238600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.238787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.238816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.238949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.238978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.239095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.239124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 
00:35:46.982 [2024-07-13 01:00:58.239300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.239330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.239457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.239488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.239597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.239627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.239758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.239788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.239997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.240027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.240211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.240249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.240360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.240389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.240568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.240598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.240706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.240736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 00:35:46.982 [2024-07-13 01:00:58.240860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.240889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it. 
00:35:46.982 [2024-07-13 01:00:58.241005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.982 [2024-07-13 01:00:58.241034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.982 qpair failed and we were unable to recover it.
00:35:46.982-00:35:46.988 [the same three-message sequence repeats continuously from 2024-07-13 01:00:58.241211 through 01:00:58.284129: roughly 200 further connect() failures with errno = 111 for tqpair=0x7fa960000b90 (addr=10.0.0.2, port=4420), each ending "qpair failed and we were unable to recover it."]
00:35:46.988 [2024-07-13 01:00:58.284256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.284286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.284402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.284431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.284554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.284584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.284728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.284757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.284864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.284892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.285095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.285125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.285259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.285289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.285549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.285579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.285771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.285800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.285930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.285964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 
00:35:46.988 [2024-07-13 01:00:58.286157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.286186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.286438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.286467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.286655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.286684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.286869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.286898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.287034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.287063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.287271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.287302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.287573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.287602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.287774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.287803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.287972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.288001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.288119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.288148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 
00:35:46.988 [2024-07-13 01:00:58.288343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.288373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.288500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.288529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.288765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.288794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.289040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.289069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.289192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.289220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.289416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.289446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.289553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.289582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.289694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.289723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.289905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.289934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.290068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.290097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 
00:35:46.988 [2024-07-13 01:00:58.290340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.290371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.988 qpair failed and we were unable to recover it. 00:35:46.988 [2024-07-13 01:00:58.290544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.988 [2024-07-13 01:00:58.290573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.290682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.290712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.290897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.290926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.291117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.291145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.291361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.291392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.291590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.291619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.291892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.291922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.292124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.292153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.292282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.292312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 
00:35:46.989 [2024-07-13 01:00:58.292577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.292606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.292802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.292832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.293107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.293136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.293345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.293376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.293502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.293537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.293733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.293762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.293981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.294010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.294179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.294208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.294464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.294495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.294606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.294641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 
00:35:46.989 [2024-07-13 01:00:58.294830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.294859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.295046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.295075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.295202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.295256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.295461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.295491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.295672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.295702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.295882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.295912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.296105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.296134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.296311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.296343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.296446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.296475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.296599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.296627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 
00:35:46.989 [2024-07-13 01:00:58.296914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.296943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.297084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.297113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.297279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.297310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.297445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.297475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.297669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.297700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.297898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.297928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.298169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.298210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.298467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.298499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.298629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.298658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.298914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.298944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 
00:35:46.989 [2024-07-13 01:00:58.299130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.299163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.299339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.989 [2024-07-13 01:00:58.299371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.989 qpair failed and we were unable to recover it. 00:35:46.989 [2024-07-13 01:00:58.299550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.299580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.299774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.299803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.299924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.299953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.300147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.300176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.300304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.300335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.300459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.300488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.300661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.300690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.300812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.300841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 
00:35:46.990 [2024-07-13 01:00:58.301017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.301046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.301245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.301276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.301460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.301490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.301684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.301714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.301894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.301924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.302026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.302056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.302320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.302351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.302558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.302587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.302778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.302807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.302917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.302951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 
00:35:46.990 [2024-07-13 01:00:58.303138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.303166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.303448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.303478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.303694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.303724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.303918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.303948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.304137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.304165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.304345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.304378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.304614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.304644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.304824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.304855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.304991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.305021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.305191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.305220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 
00:35:46.990 [2024-07-13 01:00:58.305354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.305384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.305499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.305528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.305662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.305693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.305898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.305927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.306115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.306144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.306319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.306349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.306632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.306663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.990 qpair failed and we were unable to recover it. 00:35:46.990 [2024-07-13 01:00:58.306837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.990 [2024-07-13 01:00:58.306867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.306975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.307004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.307210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.307250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 
00:35:46.991 [2024-07-13 01:00:58.307446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.307476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.307686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.307715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.307834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.307863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.308047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.308076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.308302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.308332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.308515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.308545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.308661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.308691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.308831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.308861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.309004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.309033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.309152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.309181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 
00:35:46.991 [2024-07-13 01:00:58.309405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.309441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.309569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.309598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.309779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.309809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.310046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.310077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.310283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.310318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.310433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.310462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.310653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.310682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.310809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.310839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.310955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.310984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 00:35:46.991 [2024-07-13 01:00:58.311095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.991 [2024-07-13 01:00:58.311130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.991 qpair failed and we were unable to recover it. 
00:35:46.991 [2024-07-13 01:00:58.311310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.991 [2024-07-13 01:00:58.311341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:46.991 qpair failed and we were unable to recover it.
00:35:46.996 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error (tqpair=0x7fa960000b90, addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it" — repeats for every retry from 01:00:58.311454 through 01:00:58.354370 ...]
00:35:46.996 [2024-07-13 01:00:58.354545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.996 [2024-07-13 01:00:58.354575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.996 qpair failed and we were unable to recover it. 00:35:46.996 [2024-07-13 01:00:58.354683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.996 [2024-07-13 01:00:58.354712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.996 qpair failed and we were unable to recover it. 00:35:46.996 [2024-07-13 01:00:58.354888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.996 [2024-07-13 01:00:58.354918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.996 qpair failed and we were unable to recover it. 00:35:46.996 [2024-07-13 01:00:58.355133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.996 [2024-07-13 01:00:58.355163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.996 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.355412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.355442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.355576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.355605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.355741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.355771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.355942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.355971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.356091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.356120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.356397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.356427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 
00:35:46.997 [2024-07-13 01:00:58.356551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.356581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.356700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.356729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.356989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.357018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.357280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.357310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.357435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.357464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.357652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.357681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.357881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.357915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.358102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.358131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.358257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.358287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.358559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.358589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 
00:35:46.997 [2024-07-13 01:00:58.358713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.358742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.358935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.358964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.359212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.359250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.359445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.359475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.359656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.359686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.359866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.359896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.360019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.360047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.360163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.360194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.360414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.360445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.360563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.360592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 
00:35:46.997 [2024-07-13 01:00:58.360888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.360918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.361043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.361072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.361211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.361252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.361430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.361459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.361668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.361697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.361926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.361955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.362144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.362175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.362423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.362453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.362569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.362598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.362794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.362823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 
00:35:46.997 [2024-07-13 01:00:58.363012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.363041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.363217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.363255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.363543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.363572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.363708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.997 [2024-07-13 01:00:58.363737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.997 qpair failed and we were unable to recover it. 00:35:46.997 [2024-07-13 01:00:58.363872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.363901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.364086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.364116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.364246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.364276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.364467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.364496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.364616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.364645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.364761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.364790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 
00:35:46.998 [2024-07-13 01:00:58.364986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.365018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.365156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.365185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.365296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.365327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.365457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.365487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.365604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.365633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.365747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.365776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.365884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.365918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.366133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.366162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.366335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.366365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.366488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.366517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 
00:35:46.998 [2024-07-13 01:00:58.366760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.366789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.366987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.367016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.367193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.367222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.367429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.367458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.367632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.367661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.367851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.367880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.367996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.368025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.368287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.368318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.368512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.368540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.368713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.368742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 
00:35:46.998 [2024-07-13 01:00:58.368928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.368958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.369151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.369180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.369398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.369428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.369672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.369701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.369958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.369987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.370201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.370237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.370354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.370384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.370504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.370534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.998 [2024-07-13 01:00:58.370724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.998 [2024-07-13 01:00:58.370753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.998 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.370884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.370912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 
00:35:46.999 [2024-07-13 01:00:58.371220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.371256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.371438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.371468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.371597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.371626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.371883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.371912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.372130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.372159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.372368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.372399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.372534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.372563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.372674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.372704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.372875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.372904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.373077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.373106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 
00:35:46.999 [2024-07-13 01:00:58.373300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.373331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.373503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.373531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.373640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.373670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.373785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.373814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.374080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.374110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.374300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.374331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.374567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.374606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.374737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.374766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.375009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.375040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.375167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.375196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 
00:35:46.999 [2024-07-13 01:00:58.375410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.375440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.375654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.375683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.375868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.375897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.376145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.376175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.376322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.376364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.376587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.376617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.376732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.376761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.376977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.377006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.377193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.377222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.377354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.377384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 
00:35:46.999 [2024-07-13 01:00:58.377582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.377612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.377809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.377840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.378014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.378044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.378235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.378266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.378481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.378510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.378629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.378658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.378897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.378926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.379051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.379081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.379254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.379284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:46.999 qpair failed and we were unable to recover it. 00:35:46.999 [2024-07-13 01:00:58.379405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.999 [2024-07-13 01:00:58.379434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 
00:35:47.000 [2024-07-13 01:00:58.379619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.379648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.379772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.379801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.379978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.380007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.380277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.380308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.380493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.380523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.380777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.380806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.381052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.381081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.381270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.381301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.381485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.381514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 00:35:47.000 [2024-07-13 01:00:58.381780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.000 [2024-07-13 01:00:58.381809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.000 qpair failed and we were unable to recover it. 
00:35:47.000 [2024-07-13 01:00:58.382123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.000 [2024-07-13 01:00:58.382152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.000 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every retry timestamped between 01:00:58.382271 and 01:00:58.426901; only the timestamps advance ...]
00:35:47.005 [2024-07-13 01:00:58.427177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.005 [2024-07-13 01:00:58.427207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.005 qpair failed and we were unable to recover it.
00:35:47.005 [2024-07-13 01:00:58.427425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.427456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.427743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.427773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.427953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.427983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.428172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.428202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.428434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.428463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.428636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.428665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.428768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.428797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.428968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.428997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.429120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.429150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.429269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.429300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 
00:35:47.005 [2024-07-13 01:00:58.429434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.429464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.429636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.429665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.429942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.429971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.430140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.430170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.430361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.430391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.430519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.430549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.430679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.430708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.430963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.430992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.431163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.005 [2024-07-13 01:00:58.431192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.005 qpair failed and we were unable to recover it. 00:35:47.005 [2024-07-13 01:00:58.431391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.431422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-07-13 01:00:58.431586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.431616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.431731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.431760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.431879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.431909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.432084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.432115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.432404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.432434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.432695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.432725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.432908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.432938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.433130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.433160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.433333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.433364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.433570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.433599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-07-13 01:00:58.433808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.433837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.434074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.434104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.434292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.434322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.434512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.434542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.434715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.434744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.434936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.434965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.435171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.435206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.435403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.435433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.435609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.435639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.435824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.435853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-07-13 01:00:58.436037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.436066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.436267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.436298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.436470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.436500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.436630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.436660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.436860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.436889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.437061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.437091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.437281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.437312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.437519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.437548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.437758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.437788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.437920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.437949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-07-13 01:00:58.438138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.438168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.438359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.438389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.438524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.438553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.438822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.438852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.439035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.439064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.439250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.439280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.439398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.439428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.439614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.439643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.439816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.439846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 00:35:47.006 [2024-07-13 01:00:58.439965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.006 [2024-07-13 01:00:58.439995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.006 qpair failed and we were unable to recover it. 
00:35:47.006 [2024-07-13 01:00:58.440127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.440156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.440268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.440299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.440536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.440566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.440746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.440776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.441038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.441067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.441201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.441240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.441427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.441455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.441694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.441723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.441906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.441936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.442055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.442084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 
00:35:47.007 [2024-07-13 01:00:58.442220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.442260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.442449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.442479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.442671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.442700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.442942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.442972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.443234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.443265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.443453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.443483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.443724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.443759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.443882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.443911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.444154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.444184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.444430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.444460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 
00:35:47.007 [2024-07-13 01:00:58.444646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.444675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.444913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.444943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.445207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.445261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.445430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.445459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.445642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.445671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.445784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.445813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.445987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.446016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.446275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.446306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.446445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.446475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.446664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.446693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 
00:35:47.007 [2024-07-13 01:00:58.446941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.446971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.447142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.447171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.447411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.447441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.447695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.447725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.447908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.447938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.007 [2024-07-13 01:00:58.448186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.007 [2024-07-13 01:00:58.448215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.007 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.448354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.448383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.448566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.448595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.448788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.448817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.449027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.449056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-07-13 01:00:58.449304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.449335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.449456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.449486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.449674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.449703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.449902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.449932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.450172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.450201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.450447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.450477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.450739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.450768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.450961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.450990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.451161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.451191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.451375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.451405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-07-13 01:00:58.451544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.451573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.451694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.451723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.451912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.451942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.452202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.452241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.452529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.452558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.452743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.452773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.452966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.453001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.453261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.453292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.453415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.453445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.453567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.453596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-07-13 01:00:58.453714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.453744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.453879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.453908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.454191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.454220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.454347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.454377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.454574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.454603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.454774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.454803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.454935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.454964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.455135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.455164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.455337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.455368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.455552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.455581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 
00:35:47.008 [2024-07-13 01:00:58.455772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.455801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.455996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.456024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.456242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.456273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.456394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.456423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.456605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.456634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.456814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.008 [2024-07-13 01:00:58.456843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.008 qpair failed and we were unable to recover it. 00:35:47.008 [2024-07-13 01:00:58.457051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-07-13 01:00:58.457081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-07-13 01:00:58.457347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-07-13 01:00:58.457377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-07-13 01:00:58.457551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-07-13 01:00:58.457581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 00:35:47.009 [2024-07-13 01:00:58.457819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.009 [2024-07-13 01:00:58.457847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.009 qpair failed and we were unable to recover it. 
00:35:47.009 [... the identical connect() failure (errno = 111) for tqpair=0x7fa960000b90, addr=10.0.0.2, port=4420 repeats; 208 duplicate entries spanning 01:00:58.455996 through 01:00:58.499491 omitted ...]
00:35:47.013 [2024-07-13 01:00:58.497910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.013 [2024-07-13 01:00:58.497939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.013 qpair failed and we were unable to recover it. 00:35:47.013 [2024-07-13 01:00:58.498132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.013 [2024-07-13 01:00:58.498162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.013 qpair failed and we were unable to recover it. 00:35:47.013 [2024-07-13 01:00:58.498418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.013 [2024-07-13 01:00:58.498449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.013 qpair failed and we were unable to recover it. 00:35:47.013 [2024-07-13 01:00:58.498558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.013 [2024-07-13 01:00:58.498587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.013 qpair failed and we were unable to recover it. 00:35:47.013 [2024-07-13 01:00:58.498728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.013 [2024-07-13 01:00:58.498758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.013 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.498971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.499001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.499117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.499148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.499332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.499362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.499491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.499522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.499704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.499736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 
00:35:47.014 [2024-07-13 01:00:58.499981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.500015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.500216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.500253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.500352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.500379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.500503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.500538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.500654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.500683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.500937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.500966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.501168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.501197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.501456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.501487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.501603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.501632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.501861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.501891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 
00:35:47.014 [2024-07-13 01:00:58.502018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.502047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.502294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.502325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.502509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.502539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.502657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.502687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.502797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.502826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.503052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.503082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.503205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.503241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.503505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.503535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.503775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.503804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.503918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.503947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 
00:35:47.014 [2024-07-13 01:00:58.504166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.504195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.504334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.504365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.504473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.504503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.504674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.504704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.504946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.504976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.505097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.505126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.505304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.505335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.505458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.505488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.505678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.505707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.505819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.505848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 
00:35:47.014 [2024-07-13 01:00:58.505973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.506002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.506117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.506145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.506337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.506367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.506639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.506669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.506782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.506811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.014 [2024-07-13 01:00:58.506985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.014 [2024-07-13 01:00:58.507014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.014 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.507185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.507215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.507350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.507380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.507582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.507612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.507732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.507763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 
00:35:47.015 [2024-07-13 01:00:58.507900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.507930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.508033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.508063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.508201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.508239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.508503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.508538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.508675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.508705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.508834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.508863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.509081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.509110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.509279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.509309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.509502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.509531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.509720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.509749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 
00:35:47.015 [2024-07-13 01:00:58.509854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.509883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.510058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.510087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.510208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.510250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.510384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.510413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.510523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.510551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.510734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.510761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.510958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.510988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.511300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.511331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.511446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.511476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.511616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.511646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 
00:35:47.015 [2024-07-13 01:00:58.511831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.511860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.511993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.512023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.512240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.512271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.512454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.512482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.512604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.512632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.512742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.512770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.512955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.512985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.513105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.513134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.513315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.513345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.513529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.513559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 
00:35:47.015 [2024-07-13 01:00:58.513692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.015 [2024-07-13 01:00:58.513723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.015 qpair failed and we were unable to recover it. 00:35:47.015 [2024-07-13 01:00:58.513918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.016 [2024-07-13 01:00:58.513947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.016 qpair failed and we were unable to recover it. 00:35:47.016 [2024-07-13 01:00:58.514136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.016 [2024-07-13 01:00:58.514166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.016 qpair failed and we were unable to recover it. 00:35:47.016 [2024-07-13 01:00:58.514351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.016 [2024-07-13 01:00:58.514380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.016 qpair failed and we were unable to recover it. 00:35:47.016 [2024-07-13 01:00:58.514563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.514592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.514793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.514823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.515012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.515042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.515246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.515276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.515403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.515433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.515614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.515644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 
00:35:47.297 [2024-07-13 01:00:58.515757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.515786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.515913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.515943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.516133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.516162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.516424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.516460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.516639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.516668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.516906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.516935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.517052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.517082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.517303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.517334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.517578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.517608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.517867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.517896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 
00:35:47.297 [2024-07-13 01:00:58.518033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.518062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.518254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.518285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.518529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.518558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.518817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.518847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.519035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.519065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.519183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.519212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.519391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.519420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.519551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.519581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.519765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.519794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.520046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.520075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 
00:35:47.297 [2024-07-13 01:00:58.520215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.520254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.520364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.520394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.520524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.520553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.520732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.520762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.520893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.520923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.521114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.521143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.521329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.521360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.521480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.521509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.521638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.521668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.521848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.521878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 
00:35:47.297 [2024-07-13 01:00:58.522006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.522036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.522213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.522253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.522455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.522484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.297 qpair failed and we were unable to recover it. 00:35:47.297 [2024-07-13 01:00:58.522606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.297 [2024-07-13 01:00:58.522636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.522809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.522839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.522963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.522991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.523259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.523292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.523417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.523446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.523558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.523588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.523714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.523743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 
00:35:47.298 [2024-07-13 01:00:58.523983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.524012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.524204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.524245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.524464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.524493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.524685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.524720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.524909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.524939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.525074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.525103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.525243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.525276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.525519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.525548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.525676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.525706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 00:35:47.298 [2024-07-13 01:00:58.525889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.298 [2024-07-13 01:00:58.525918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.298 qpair failed and we were unable to recover it. 
00:35:47.298 [2024-07-13 01:00:58.526032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.298 [2024-07-13 01:00:58.526061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.298 qpair failed and we were unable to recover it.
00:35:47.303 [... the same three-message sequence repeats without interruption from 01:00:58.526 through 01:00:58.567: every connect() to 10.0.0.2, port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error in turn for tqpair values 0x7fa960000b90, 0x7fa958000b90, and 0x1321b60, and each qpair fails and cannot be recovered ...]
00:35:47.303 [2024-07-13 01:00:58.567514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.567544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.567748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.567778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.567974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.568004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.568279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.568309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.568491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.568520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.568714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.568744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.568931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.568961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.569155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.569185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.569396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.569426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.569687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.569717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 
00:35:47.303 [2024-07-13 01:00:58.569875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.569904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.303 [2024-07-13 01:00:58.570083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.303 [2024-07-13 01:00:58.570112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.303 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.570357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.570388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.570535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.570565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.570680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.570709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.570831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.570860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.571075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.571104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.571323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.571353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.571494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.571524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.571637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.571666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 
00:35:47.304 [2024-07-13 01:00:58.571863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.571893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.572145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.572174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.572370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.572400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.572640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.572670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.572853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.572887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.572986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.573015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.573211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.573252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.573401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.573430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.573729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.573759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.574024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.574053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 
00:35:47.304 [2024-07-13 01:00:58.574376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.574407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.574587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.574617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.574903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.574932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.575245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.575275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.575484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.575513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.575656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.575686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.575815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.575844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.576026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.576056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.576242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.576273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.576506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.576536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 
00:35:47.304 [2024-07-13 01:00:58.576739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.576769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.577055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.577084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.577219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.577255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.577447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.577476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.577718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.577748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.577879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.577909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.578046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.304 [2024-07-13 01:00:58.578075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.304 qpair failed and we were unable to recover it. 00:35:47.304 [2024-07-13 01:00:58.578341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.578372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.578498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.578527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.578696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.578725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 
00:35:47.305 [2024-07-13 01:00:58.579000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.579030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.579268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.579303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.579511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.579540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.579705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.579734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.579962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.579991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.580178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.580208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.580448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.580479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.580654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.580684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.580889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.580919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.581044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.581073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 
00:35:47.305 [2024-07-13 01:00:58.581313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.581344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.581481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.581511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.581714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.581744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.581907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.581936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.582111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.582140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.582313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.582344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.582538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.582567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.582805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.582834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.583017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.583047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.583299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.583329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 
00:35:47.305 [2024-07-13 01:00:58.583500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.583530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.583719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.583749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.583985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.584014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.584276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.584306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.584440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.584470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.584709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.584738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.584949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.584978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.585166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.585195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.585392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.585423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.585615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.585644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 
00:35:47.305 [2024-07-13 01:00:58.585814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.585843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.586101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.586131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.305 [2024-07-13 01:00:58.586371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.305 [2024-07-13 01:00:58.586401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.305 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.586637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.586666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.586837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.586866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.587081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.587110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.587331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.587361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.587499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.587529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.587768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.587796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.587935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.587964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 
00:35:47.306 [2024-07-13 01:00:58.588204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.588250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.588383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.588413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.588693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.588727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.588967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.588996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.589168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.589197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.589443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.589473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.589657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.589686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.589869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.589898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.590162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.590192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.590386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.590416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 
00:35:47.306 [2024-07-13 01:00:58.590669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.590698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.590891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.590920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.591055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.591084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.591203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.591243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.591482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.591511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.591751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.591780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.592132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.592161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.592355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.592386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.592603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.592632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.592817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.592847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 
00:35:47.306 [2024-07-13 01:00:58.592969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.592998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.593282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.593313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.593460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.593489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.593683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.593713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.593911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.593941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.594158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.594187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.594392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.594422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.594561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.594591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.594717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.594746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.595021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.595056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 
00:35:47.306 [2024-07-13 01:00:58.595302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.595332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.595468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.595498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.595753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.595782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.306 [2024-07-13 01:00:58.595951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-13 01:00:58.595980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.306 qpair failed and we were unable to recover it. 00:35:47.307 [2024-07-13 01:00:58.596220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-13 01:00:58.596259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.307 qpair failed and we were unable to recover it. 00:35:47.307 [2024-07-13 01:00:58.596446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-13 01:00:58.596476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.307 qpair failed and we were unable to recover it. 00:35:47.307 [2024-07-13 01:00:58.596684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-13 01:00:58.596713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.307 qpair failed and we were unable to recover it. 00:35:47.307 [2024-07-13 01:00:58.597037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-13 01:00:58.597067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.307 qpair failed and we were unable to recover it. 00:35:47.307 [2024-07-13 01:00:58.597315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-13 01:00:58.597345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.307 qpair failed and we were unable to recover it. 00:35:47.307 [2024-07-13 01:00:58.597484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-13 01:00:58.597513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.307 qpair failed and we were unable to recover it. 
00:35:47.307 [2024-07-13 01:00:58.597706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.307 [2024-07-13 01:00:58.597736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.307 qpair failed and we were unable to recover it.
[... the same three-record failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 01:00:58.598000 through 01:00:58.648977 ...]
00:35:47.312 [2024-07-13 01:00:58.649223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.312 [2024-07-13 01:00:58.649261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.312 qpair failed and we were unable to recover it.
00:35:47.312 [2024-07-13 01:00:58.649450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.649480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.649791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.649821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.650084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.650113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.650387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.650418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.650616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.650645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.650798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.650828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.651090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.651119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.651296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.651327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.651506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.651535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.651738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.651768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 
00:35:47.312 [2024-07-13 01:00:58.652052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.652082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.652358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.652388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.652583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.652612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.652754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.652783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.652996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.653026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.653272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.653303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.653498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.653528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.653672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.653701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.653901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.653931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.654201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.654241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 
00:35:47.312 [2024-07-13 01:00:58.654390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.654419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.654553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.654583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.654780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.654810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.654986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.655021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.655274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.312 [2024-07-13 01:00:58.655304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.312 qpair failed and we were unable to recover it. 00:35:47.312 [2024-07-13 01:00:58.655446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.655475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.655626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.655656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.655779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.655808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.655946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.655974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.656095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.656124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 
00:35:47.313 [2024-07-13 01:00:58.656322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.656353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.656600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.656630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.656894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.656923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.657190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.657219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.657423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.657453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.657583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.657611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.657803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.657832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.658028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.658058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.658372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.658402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.658592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.658621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 
00:35:47.313 [2024-07-13 01:00:58.658808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.658837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.659126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.659156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.659358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.659389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.659541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.659571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.659721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.659750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.660035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.660065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.660285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.660316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.660513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.660542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.660789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.660818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.660955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.660985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 
00:35:47.313 [2024-07-13 01:00:58.661279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.661309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.661583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.661613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.661793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.661823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.662123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.662152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.662300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.662330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.662484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.662513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.662649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.662679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.662811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.662840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.663018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.663048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 00:35:47.313 [2024-07-13 01:00:58.663155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.313 [2024-07-13 01:00:58.663185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.313 qpair failed and we were unable to recover it. 
00:35:47.313 [2024-07-13 01:00:58.663400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.663430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.663628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.663657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.663853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.663883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.664101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.664130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.664269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.664302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.664552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.664582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.664777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.664807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.665083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.665114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.665312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.665343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.665541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.665570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 
00:35:47.314 [2024-07-13 01:00:58.665746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.665775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.666056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.666086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.666201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.666237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.666456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.666485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.666748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.666778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.666894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.666924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.667168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.667197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.667352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.667382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.667539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.667569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.667768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.667798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 
00:35:47.314 [2024-07-13 01:00:58.668006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.668035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.668308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.668339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.668548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.668578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.668811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.668841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.314 [2024-07-13 01:00:58.669040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.314 [2024-07-13 01:00:58.669070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.314 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.669257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.669289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.669536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.669566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.669689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.669718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.669853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.669883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.670180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.670210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 
00:35:47.315 [2024-07-13 01:00:58.670430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.670461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.670654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.670689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.670888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.670917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.671164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.671195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.671436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.671466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.671666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.671695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.671972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.672002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.672274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.672305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.672580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.672609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.672896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.672926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 
00:35:47.315 [2024-07-13 01:00:58.673182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.673212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.673367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.673398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.673578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.673607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.673808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.673837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.674108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.674137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.674337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.674367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.674559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.674588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.674835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.674864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.675072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.675102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.675384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.675414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 
00:35:47.315 [2024-07-13 01:00:58.675550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.675580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.675715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.675745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.675973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.676002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.676207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.676249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.676386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.676415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.676618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.676647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.676980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.315 [2024-07-13 01:00:58.677010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.315 qpair failed and we were unable to recover it. 00:35:47.315 [2024-07-13 01:00:58.677260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.677291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.677490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.677526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.677707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.677736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 
00:35:47.316 [2024-07-13 01:00:58.678029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.678059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.678349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.678380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.678549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.678578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.678720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.678750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.678893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.678922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.679146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.679175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.679399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.679429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.679679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.679709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.679990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.680020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 00:35:47.316 [2024-07-13 01:00:58.680248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.316 [2024-07-13 01:00:58.680280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.316 qpair failed and we were unable to recover it. 
00:35:47.316 [2024-07-13 01:00:58.680415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:47.316 [2024-07-13 01:00:58.680445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 
00:35:47.316 qpair failed and we were unable to recover it. 
(the same three-record connect()/qpair failure sequence repeats back-to-back for every reconnect attempt of tqpair=0x1321b60 to 10.0.0.2:4420, from [2024-07-13 01:00:58.680599] through [2024-07-13 01:00:58.733557], log timestamps 00:35:47.316-00:35:47.320; only the microsecond timestamps differ between repetitions)
00:35:47.320 [2024-07-13 01:00:58.733746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.733776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.734031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.734062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.734261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.734292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.734447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.734476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.734609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.734638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.734841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.734871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.735149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.735179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.735402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.735432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.735636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.735666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.735873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.735903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 
00:35:47.320 [2024-07-13 01:00:58.736177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.736206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.736471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.736503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.736689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.736719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.736975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.737004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.737330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.737361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.737606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.737636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.737832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.737861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.738054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.738083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.738353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.738384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 00:35:47.320 [2024-07-13 01:00:58.738531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.738561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.320 qpair failed and we were unable to recover it. 
00:35:47.320 [2024-07-13 01:00:58.738815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.320 [2024-07-13 01:00:58.738845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.739123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.739153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.739389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.739419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.739700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.739729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.740016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.740046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.740266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.740296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.740437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.740466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.740622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.740652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.740801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.740831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.741111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.741140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 
00:35:47.321 [2024-07-13 01:00:58.741397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.741429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.741697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.741726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.741957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.741986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.742172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.742202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.742432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.742463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.742651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.742681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.742870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.742900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.743163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.743193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.743346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.743377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.743583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.743612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 
00:35:47.321 [2024-07-13 01:00:58.743902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.743932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.744135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.744165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.744393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.744424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.744552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.744582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.744793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.744823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.745047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.745077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.745306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.745337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.745486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.745516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.321 [2024-07-13 01:00:58.745667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.321 [2024-07-13 01:00:58.745697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.321 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.745994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.746024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 
00:35:47.322 [2024-07-13 01:00:58.746217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.746255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.746392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.746422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.746635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.746665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.746945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.746975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.747273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.747304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.747470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.747502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.747658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.747688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.747899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.747929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.748081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.748111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.748310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.748341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 
00:35:47.322 [2024-07-13 01:00:58.748494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.748524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.748678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.748708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.748930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.748959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.749106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.749141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.749417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.749450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.749652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.749682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.749808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.749838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.750055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.750084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.750286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.750317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.750432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.750462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 
00:35:47.322 [2024-07-13 01:00:58.750745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.750775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.751050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.322 [2024-07-13 01:00:58.751080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.322 qpair failed and we were unable to recover it. 00:35:47.322 [2024-07-13 01:00:58.751410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.751442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.751566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.751595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.751738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.751767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.751951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.751981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.752243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.752273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.752453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.752483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.752639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.752668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.752829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.752859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 
00:35:47.323 [2024-07-13 01:00:58.753063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.753092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.753244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.753275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.753432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.753462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.753741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.753772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.753956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.753986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.754262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.754293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.323 qpair failed and we were unable to recover it. 00:35:47.323 [2024-07-13 01:00:58.754433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.323 [2024-07-13 01:00:58.754463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.754660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.754691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.754894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.754923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.755068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.755098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 
00:35:47.324 [2024-07-13 01:00:58.755288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.755324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.755575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.755605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.755736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.755767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.756075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.756104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.756245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.756276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.756408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.756437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.756716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.756748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.756991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.757021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.757320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.757351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.757509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.757539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 
00:35:47.324 [2024-07-13 01:00:58.757739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.757771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.758082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.758114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.758369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.758400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.758596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.758626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.758910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.758941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.759161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.759191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.759405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.759436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.759580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.759610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.759832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.759863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.760088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.760122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 
00:35:47.324 [2024-07-13 01:00:58.760328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.760358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.760566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.760596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.760787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.760821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.760978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.761015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.761326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.324 [2024-07-13 01:00:58.761357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.324 qpair failed and we were unable to recover it. 00:35:47.324 [2024-07-13 01:00:58.761632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.761671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.761937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.761980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.762281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.762320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.762530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.762560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.762820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.762853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 
00:35:47.325 [2024-07-13 01:00:58.763123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.763159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.763380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.763412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.763547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.763577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.763712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.763742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.763951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.763981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.764298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.764329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.764485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.764516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.764669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.764699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.764980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.765010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 00:35:47.325 [2024-07-13 01:00:58.765214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.325 [2024-07-13 01:00:58.765275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.325 qpair failed and we were unable to recover it. 
00:35:47.325 [2024-07-13 01:00:58.765470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.325 [2024-07-13 01:00:58.765501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.325 qpair failed and we were unable to recover it.
[... the identical three-line failure repeats, essentially verbatim, roughly 210 times between 01:00:58.765470 and 01:00:58.818151: every connect() attempt from tqpair=0x1321b60 to 10.0.0.2 port 4420 fails with errno = 111 and the qpair is not recovered; only the final occurrence is kept below ...]
00:35:47.334 [2024-07-13 01:00:58.818120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.334 [2024-07-13 01:00:58.818151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.334 qpair failed and we were unable to recover it.
00:35:47.334 [2024-07-13 01:00:58.818508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.818540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.818678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.818708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.818865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.818894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.819097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.819126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.819324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.819356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.819541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.819571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.819827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.819856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.820047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.820077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.820294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.820325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.820529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.820560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 
00:35:47.334 [2024-07-13 01:00:58.820768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.820797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.821077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.821107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.821292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.821323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.821530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.821560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.821781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.821810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.822008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.822038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.822235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.822266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.822448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.822478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.822635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.822667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 00:35:47.334 [2024-07-13 01:00:58.822815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.334 [2024-07-13 01:00:58.822845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.334 qpair failed and we were unable to recover it. 
00:35:47.335 [2024-07-13 01:00:58.823070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.823099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.823299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.823332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.823465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.823495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.823752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.823782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.823968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.823998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.824200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.824238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.824385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.824415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.824645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.824674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.824809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.824839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.825110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.825140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 
00:35:47.335 [2024-07-13 01:00:58.825333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.825363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.825512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.825542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.825736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.825766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.825970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.826000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.826192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.826221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.826440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.826471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.826608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.826638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.826825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.826855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.827085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.827116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.827357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.827387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 
00:35:47.335 [2024-07-13 01:00:58.827671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.827702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.827863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.827893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.828043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.828072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.828356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.828387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.335 [2024-07-13 01:00:58.828595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.335 [2024-07-13 01:00:58.828625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.335 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.828830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.828860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.829117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.829146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.829358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.829389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.829526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.829554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.829694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.829723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 
00:35:47.336 [2024-07-13 01:00:58.829940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.829969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.830195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.830232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.830442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.830472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.830661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.830691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.830934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.830963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.831272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.831305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.831461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.831492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.831638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.831667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.831794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.831824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.832104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.832133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 
00:35:47.336 [2024-07-13 01:00:58.832316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.832348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.832562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.832591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.832796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.832836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.832956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.832986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.833267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.833299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.336 qpair failed and we were unable to recover it. 00:35:47.336 [2024-07-13 01:00:58.833589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.336 [2024-07-13 01:00:58.833619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.833767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.833797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.834075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.834105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.834351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.834381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.834599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.834628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 
00:35:47.628 [2024-07-13 01:00:58.834844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.834877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.835088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.835118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.835257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.835289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.835523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.835553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.835783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.835813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.835999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.836030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.836236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.836267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.836404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.836434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.836666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.836696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.836890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.836924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 
00:35:47.628 [2024-07-13 01:00:58.837180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.837210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.837504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.837534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.837687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.837717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.837990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.838019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.838275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.838307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.838435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.838465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.838679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.838709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.838840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.838869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.839158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.839188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.839477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.839518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 
00:35:47.628 [2024-07-13 01:00:58.839739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.839769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.840007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.840038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.840236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.840268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.840435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.840466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.840697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.840728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.840944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.840976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.841238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.841271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.841481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.841512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.841698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.841738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.841985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.842016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 
00:35:47.628 [2024-07-13 01:00:58.842222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.842263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.842450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.842482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.842695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.842725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.628 [2024-07-13 01:00:58.842928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.628 [2024-07-13 01:00:58.842958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.628 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.843249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.843281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.843436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.843467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.843723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.843752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.843997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.844027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.844177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.844207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.844441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.844472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 
00:35:47.629 [2024-07-13 01:00:58.844623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.844653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.844951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.844981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.845263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.845294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.845580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.845611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.845771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.845800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.846055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.846085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.846298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.846329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.846502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.846531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.846734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.846765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.846975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.847005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 
00:35:47.629 [2024-07-13 01:00:58.847274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.847304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.847460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.847492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.847756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.847786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.847989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.848018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.848299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.848331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.848590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.848620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.848888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.848920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.849133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.849164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.849381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.849412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 00:35:47.629 [2024-07-13 01:00:58.849667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.629 [2024-07-13 01:00:58.849698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.629 qpair failed and we were unable to recover it. 
00:35:47.629 [2024-07-13 01:00:58.849900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.629 [2024-07-13 01:00:58.849930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.629 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats roughly 200 more times against addr=10.0.0.2, port=4420 (tqpair=0x1321b60), wall clock 01:00:58.849900 through 01:00:58.902100, elapsed 00:35:47.629 through 00:35:47.634 ...]
00:35:47.634 [2024-07-13 01:00:58.902337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.634 [2024-07-13 01:00:58.902368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.634 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.902527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.902556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.902750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.902781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.903079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.903109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.903293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.903323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.903524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.903554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.903828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.903858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.903996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.904026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.904258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.904289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.904484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.904513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 
00:35:47.635 [2024-07-13 01:00:58.904654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.904683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.904941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.904972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.905176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.905206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.905449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.905479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.905756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.905786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.906008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.906038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.906323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.906355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.906558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.906589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.906783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.906813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.906995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.907025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 
00:35:47.635 [2024-07-13 01:00:58.907221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.907261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.907403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.907433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.907649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.907678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.907808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.907838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.908116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.908146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.908294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.908325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.908520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.908550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.908734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.908765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.908982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.909011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.909198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.909235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 
00:35:47.635 [2024-07-13 01:00:58.909489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.909520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.909772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.909802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.910060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.910090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.910294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.910326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.910472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.910507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.910714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.910745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.910947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.910977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.911257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.911289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.911436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.911466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.911733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.911763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 
00:35:47.635 [2024-07-13 01:00:58.912002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.912032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.912269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.635 [2024-07-13 01:00:58.912300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.635 qpair failed and we were unable to recover it. 00:35:47.635 [2024-07-13 01:00:58.912578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.912608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.912789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.912820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.913100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.913130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.913346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.913377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.913570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.913600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.913821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.913851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.914152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.914183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.914391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.914421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 
00:35:47.636 [2024-07-13 01:00:58.914608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.914637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.914939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.914969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.915156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.915185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.915381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.915413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.915698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.915728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.915867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.915897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.916080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.916110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.916277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.916308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.916469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.916498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.916690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.916720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 
00:35:47.636 [2024-07-13 01:00:58.916977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.917006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.917138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.917173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.917391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.917421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.917696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.917727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.917994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.918023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.918289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.918320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.918532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.918565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.918772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.918802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.919011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.919041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.919253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.919286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 
00:35:47.636 [2024-07-13 01:00:58.919403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.919434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.919697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.919727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.919929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.919960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.920184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.920215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.920356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.920387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.920530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.920561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.920761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.920791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.636 [2024-07-13 01:00:58.920926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.636 [2024-07-13 01:00:58.920956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.636 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.921246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.921277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.921422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.921452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 
00:35:47.637 [2024-07-13 01:00:58.921596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.921626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.921740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.921771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.922072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.922102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.922244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.922276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.922533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.922564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.922733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.922763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.923059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.923088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.923363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.923394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.923598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.923633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.923833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.923863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 
00:35:47.637 [2024-07-13 01:00:58.924063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.924093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.924304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.924336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.924557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.924587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.924787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.924816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.925015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.925044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.925190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.925221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.925487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.925517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.925714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.925743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.925979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.926008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.926213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.926252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 
00:35:47.637 [2024-07-13 01:00:58.926453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.926484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.926654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.926683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.926943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.926978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.927287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.927318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.927476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.927506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.927653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.927683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.927833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.927865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.928074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.928104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.928307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.928338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.928595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.928625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 
00:35:47.637 [2024-07-13 01:00:58.928829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.928859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.929123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.929152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.929358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.929388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.929586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.929616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.929756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.929785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.929985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.930015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.930281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.930313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.930526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.930556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.930822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.637 [2024-07-13 01:00:58.930851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.637 qpair failed and we were unable to recover it. 00:35:47.637 [2024-07-13 01:00:58.931056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.931086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 
00:35:47.638 [2024-07-13 01:00:58.931296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.931326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.931535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.931565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.931847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.931877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.932171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.932200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.932374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.932404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.932657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.932688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.932890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.932920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.933056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.933086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.933290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.933320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 00:35:47.638 [2024-07-13 01:00:58.933527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.638 [2024-07-13 01:00:58.933557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.638 qpair failed and we were unable to recover it. 
00:35:47.638 [2024-07-13 01:00:58.933810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.638 [2024-07-13 01:00:58.933839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.638 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 01:00:58.933810 through 01:00:58.986338: connect() to 10.0.0.2 port 4420 returns errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1321b60, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:47.643 [2024-07-13 01:00:58.986566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.986596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.986926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.986956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.987135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.987164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.987376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.987407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.987673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.987702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.987914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.987943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.988234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.988265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.988423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.988453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.988651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.988681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.988877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.988906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 
00:35:47.643 [2024-07-13 01:00:58.989124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.989153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.989348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.989378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.989582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.989611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.989820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.989850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.989996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.990026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.990239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.990270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.990495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.990525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.990719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.990748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.991040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.991070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 00:35:47.643 [2024-07-13 01:00:58.991262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.643 [2024-07-13 01:00:58.991292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.643 qpair failed and we were unable to recover it. 
00:35:47.643 [2024-07-13 01:00:58.991555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.991585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.991737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.991767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.991962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.991991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.992266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.992297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.992520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.992550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.992749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.992779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.993040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.993069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.993373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.993403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.993679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.993709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.993859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.993887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 
00:35:47.644 [2024-07-13 01:00:58.994164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.994193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.994412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.994443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.994719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.994748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.994864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.994898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.995178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.995207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.995373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.995403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.995630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.995659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.995969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.995998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.996268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.996299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.996496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.996526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 
00:35:47.644 [2024-07-13 01:00:58.996791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.996821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.997121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.997150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.997385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.997416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.997633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.997663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.997930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.997960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.998175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.998205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.998369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.998399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.998686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.998716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.998984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.999014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.999315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.999346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 
00:35:47.644 [2024-07-13 01:00:58.999623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.999653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:58.999952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:58.999982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.000263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.000294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.000589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.000620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.000926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.000956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.001236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.001266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.001471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.001501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.001715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.001746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.002061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.002092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.002361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.002392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 
00:35:47.644 [2024-07-13 01:00:59.002526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.002561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.644 qpair failed and we were unable to recover it. 00:35:47.644 [2024-07-13 01:00:59.002853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.644 [2024-07-13 01:00:59.002883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.003067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.003097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.003306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.003336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.003614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.003644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.003844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.003874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.004072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.004102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.004388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.004419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.004706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.004737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.005041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.005071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 
00:35:47.645 [2024-07-13 01:00:59.005303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.005334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.005563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.005593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.005894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.005923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.006201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.006240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.006436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.006467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.006625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.006654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.006840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.006870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.007074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.007104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.007382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.007413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.007550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.007580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 
00:35:47.645 [2024-07-13 01:00:59.007800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.007830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.008056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.008086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.008380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.008410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.008610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.008639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.008782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.008812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.009068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.009099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.009354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.009385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.009687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.009716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.009935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.009966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.010237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.010268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 
00:35:47.645 [2024-07-13 01:00:59.010478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.010507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.010760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.010789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.010990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.011020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.011286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.011316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.011608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.011638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.011849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.011879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.012060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.645 [2024-07-13 01:00:59.012090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.645 qpair failed and we were unable to recover it. 00:35:47.645 [2024-07-13 01:00:59.012340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.012371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.012573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.012603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.012799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.012829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 
00:35:47.646 [2024-07-13 01:00:59.013134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.013164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.013449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.013480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.013736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.013765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.013960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.013989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.014173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.014202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.014494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.014525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.014739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.014769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.015050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.015079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.015286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.015316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.015522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.015551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 
00:35:47.646 [2024-07-13 01:00:59.015742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.015772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.015975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.016005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.016139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.016169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.016498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.016529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.016798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.016827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.017027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.017057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.017254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.017285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.017471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.017501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.017777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.017807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.018003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.018032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 
00:35:47.646 [2024-07-13 01:00:59.018311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.018341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.018501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.018532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.018835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.018865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.019002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.019032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.019309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.019340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.019600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.019632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.019877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.019907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.020049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.020079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.020363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.020401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 00:35:47.646 [2024-07-13 01:00:59.020553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.646 [2024-07-13 01:00:59.020583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.646 qpair failed and we were unable to recover it. 
00:35:47.646 [2024-07-13 01:00:59.020886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.646 [2024-07-13 01:00:59.020916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.646 qpair failed and we were unable to recover it.
00:35:47.648 [... the same error triple (connect() failed, errno = 111 / sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats unchanged through 2024-07-13 01:00:59.045698 ...]
00:35:47.649 [2024-07-13 01:00:59.045971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.649 [2024-07-13 01:00:59.046047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:47.649 qpair failed and we were unable to recover it.
00:35:47.652 [... the same error triple then repeats for tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 through 2024-07-13 01:00:59.077480 ...]
00:35:47.652 [2024-07-13 01:00:59.077764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.077794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.077979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.078009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.078217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.078255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.078511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.078546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.078727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.078757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.078983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.079013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.079289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.079321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.079603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.079634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.079896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.079925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.080178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.080208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 
00:35:47.652 [2024-07-13 01:00:59.080432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.080462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.080744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.080774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.080972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.081002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.081258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.081290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.081591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.081622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.081837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.081867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.082072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.082103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.082415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.082447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.082711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.082742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.082872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.082902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 
00:35:47.652 [2024-07-13 01:00:59.083175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.083205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.083505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.083536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.083732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.083762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.083942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.083972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.084239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.084270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.084578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.084608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.084934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.084965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.085247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.085278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.085560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.085591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.085775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.085805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 
00:35:47.652 [2024-07-13 01:00:59.086089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.086119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.086399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.086431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.086695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.086725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.086931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.086961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.087266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.087299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.087529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.087560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.087762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.087792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.088047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.652 [2024-07-13 01:00:59.088076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.652 qpair failed and we were unable to recover it. 00:35:47.652 [2024-07-13 01:00:59.088353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.088384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.088592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.088623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 
00:35:47.653 [2024-07-13 01:00:59.088875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.088906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.089113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.089143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.089419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.089450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.089658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.089694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.089898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.089928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.090198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.090239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.090476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.090505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.090718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.090748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.090954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.090984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.091254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.091285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 
00:35:47.653 [2024-07-13 01:00:59.091426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.091456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.091680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.091710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.092014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.092044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.092339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.092371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.092652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.092682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.092977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.093006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.093311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.093343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.093550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.093580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.093800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.093830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.094011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.094041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 
00:35:47.653 [2024-07-13 01:00:59.094319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.094350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.094605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.094636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.094890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.094920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.095116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.095145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.095350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.095382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.095585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.095616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.095837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.095866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.096072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.096102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.096303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.096336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.096451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.096480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 
00:35:47.653 [2024-07-13 01:00:59.096598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.096628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.096770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.096799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.096994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.097023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.653 [2024-07-13 01:00:59.097247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.653 [2024-07-13 01:00:59.097279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.653 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.097555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.097584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.097773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.097802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.098026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.098056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.098313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.098344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.098551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.098579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.098830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.098859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 
00:35:47.654 [2024-07-13 01:00:59.099133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.099164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.099378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.099410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.099673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.099705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.099995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.100036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.100320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.100360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.100623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.100656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.100894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.100927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.101143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.101176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.101389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.101422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.101589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.101628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 
00:35:47.654 [2024-07-13 01:00:59.101831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.101865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.102097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.102134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.102400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.102434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.102714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.102747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.102878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.102909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.103186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.103221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.103391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.103423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.103647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.103681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.103821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.103862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.104092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.104127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 
00:35:47.654 [2024-07-13 01:00:59.104347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.104385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.104694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.104727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.104896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.104929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.105121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.105159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.105394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.105428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.105576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.105611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.105894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.105934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.106208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.106253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.106532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.106571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.106771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.106801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 
00:35:47.654 [2024-07-13 01:00:59.107065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.107095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.107376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.107407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.107669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.107699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.107895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.654 [2024-07-13 01:00:59.107924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.654 qpair failed and we were unable to recover it. 00:35:47.654 [2024-07-13 01:00:59.108124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.108154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.108432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.108463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.108744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.108774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.108994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.109024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.109299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.109330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.109627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.109657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 
00:35:47.655 [2024-07-13 01:00:59.109855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.109885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.110186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.110216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.110511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.110541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.110736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.110772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.111027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.111057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.111336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.111367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.111620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.111651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.111855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.111885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.112067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.112096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.112280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.112311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 
00:35:47.655 [2024-07-13 01:00:59.112514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.112543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.112824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.112854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.113057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.113087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.113278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.113310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.113564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.113595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.113897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.113927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.114155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.114185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.114478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.114510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.114768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.114798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 00:35:47.655 [2024-07-13 01:00:59.115055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.655 [2024-07-13 01:00:59.115085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.655 qpair failed and we were unable to recover it. 
00:35:47.660 [2024-07-13 01:00:59.170007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.170037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.170233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.170265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.170522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.170553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.170804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.170835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.171021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.171050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.171266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.171298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.171578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.171608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.171911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.171941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.172143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.172173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.172386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.172418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 
00:35:47.660 [2024-07-13 01:00:59.172680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.172710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.660 [2024-07-13 01:00:59.172978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.660 [2024-07-13 01:00:59.173008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.660 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.173203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.173243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.173470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.173500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.173636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.173666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.173972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.174003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.174223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.174262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.174398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.174427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.174629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.174659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.174842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.174871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 
00:35:47.933 [2024-07-13 01:00:59.175127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.175157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.175445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.175476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.175734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.175764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.176013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.176049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.176276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.176308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.176583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.176614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.176902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.176932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.177119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.177148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.177421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.177453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.177717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.177747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 
00:35:47.933 [2024-07-13 01:00:59.177933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.177963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.178259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.178290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.933 qpair failed and we were unable to recover it. 00:35:47.933 [2024-07-13 01:00:59.178472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.933 [2024-07-13 01:00:59.178502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.178706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.178736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.178985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.179014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.179285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.179316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.179623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.179653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.179870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.179901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.180178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.180207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.180481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.180513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 
00:35:47.934 [2024-07-13 01:00:59.180814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.180844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.181132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.181162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.181294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.181325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.181524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.181553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.181758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.181788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.182067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.182097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.182352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.182383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.182594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.182624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.182893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.182922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.183173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.183203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 
00:35:47.934 [2024-07-13 01:00:59.183521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.183553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.183813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.183843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.184154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.184184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.184487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.184519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.184820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.184850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.185127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.185157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.185456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.185488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.185684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.185714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.186014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.186044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.186321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.186352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 
00:35:47.934 [2024-07-13 01:00:59.186568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.186597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.186779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.186808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.187002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.187032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.187295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.187326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.187628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.187659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.187885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.187915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.188129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.188159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.188345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.188376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.188648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.188678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.188960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.188989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 
00:35:47.934 [2024-07-13 01:00:59.189257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.934 [2024-07-13 01:00:59.189289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.934 qpair failed and we were unable to recover it. 00:35:47.934 [2024-07-13 01:00:59.189580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.189610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.189893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.189923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.190219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.190258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.190444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.190474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.190740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.190770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.190964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.190994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.191254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.191285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.191490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.191520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.191773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.191803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 
00:35:47.935 [2024-07-13 01:00:59.192082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.192112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.192412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.192444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.192718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.192750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.192951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.192980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.193182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.193213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.193416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.193446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.193648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.193677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.193958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.193987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.194246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.194277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.194459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.194489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 
00:35:47.935 [2024-07-13 01:00:59.194629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.194665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.194946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.194976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.195238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.195269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.195543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.195573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.195853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.195883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.196149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.196180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.196482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.196513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.196714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.196744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.197016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.197046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.197201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.197255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 
00:35:47.935 [2024-07-13 01:00:59.197537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.197567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.197844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.197874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.198174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.198204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.198483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.198514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.198718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.198747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.199012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.199041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.199317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.199349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.199646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.199677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.199880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.199909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.200178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.200207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 
00:35:47.935 [2024-07-13 01:00:59.200505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.935 [2024-07-13 01:00:59.200536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.935 qpair failed and we were unable to recover it. 00:35:47.935 [2024-07-13 01:00:59.200744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.200773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.201073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.201103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.201313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.201344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.201621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.201651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.201902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.201932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.202200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.202239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.202465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.202496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.202747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.202776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.202979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.203008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 
00:35:47.936 [2024-07-13 01:00:59.203289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.203321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.203538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.203568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.203820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.203850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.204057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.204087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.204365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.204397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.204676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.204706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.204997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.205027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.205177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.205208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.205499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.205529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 00:35:47.936 [2024-07-13 01:00:59.205733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.936 [2024-07-13 01:00:59.205762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.936 qpair failed and we were unable to recover it. 
00:35:47.936 [2024-07-13 01:00:59.206038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.936 [2024-07-13 01:00:59.206074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:47.936 qpair failed and we were unable to recover it.
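The pair of messages repeating above comes from SPDK's POSIX sock layer (posix.c) and its NVMe/TCP transport (nvme_tcp.c): each connect() to the target at 10.0.0.2:4420 is refused because the nvmf_tgt process serving that port has just been killed, and on Linux errno 111 is ECONNREFUSED. A minimal standalone C sketch, not part of the test suite (address and port simply copied from the log), reproduces the same errno when nothing is listening:

```c
/* Standalone illustration (not SPDK code): connect() to a reachable host
 * with no listener on the port fails with ECONNREFUSED, which is errno 111
 * on Linux -- the value posix_sock_create keeps logging above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target down this prints: connect() failed, errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```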
00:35:47.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1619804 Killed "${NVMF_APP[@]}" "$@"
00:35:47.938 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:47.938 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:47.938 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:47.938 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1620678
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1620678
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1620678 ']'
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:47.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:47.939 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
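At this point the harness has killed the old target (pid 1619804) and is restarting it for tc2: a fresh nvmf_tgt (pid 1620678) is launched in the cvl_0_0_ns_spdk namespace with core mask 0xF0, and waitforlisten polls until the app's RPC socket at /var/tmp/spdk.sock accepts connections, retrying up to max_retries=100 times. A rough C sketch of that kind of wait loop, for illustration only (SPDK's real waitforlisten is a shell helper in autotest_common.sh and differs in detail):

```c
/* Illustrative only: a generic "wait until the RPC socket is listening"
 * loop in the spirit of waitforlisten. The path and retry count mirror
 * the values traced in the log; the actual SPDK helper is shell, not C. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_unix_listener(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;              /* the app is up and listening */
        }
        close(fd);
        usleep(100 * 1000);        /* retry every 100 ms */
    }
    return -1;                     /* gave up: the app never listened */
}

int main(void)
{
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "target never listened on /var/tmp/spdk.sock\n");
        return 1;
    }
    puts("target is listening");
    return 0;
}
```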
00:35:47.941 [2024-07-13 01:00:59.260500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.941 [2024-07-13 01:00:59.260531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:47.941 qpair failed and we were unable to recover it.
00:35:47.941 [2024-07-13 01:00:59.260672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.941 [2024-07-13 01:00:59.260702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.941 qpair failed and we were unable to recover it. 00:35:47.941 [2024-07-13 01:00:59.260924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.941 [2024-07-13 01:00:59.260955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.941 qpair failed and we were unable to recover it. 00:35:47.941 [2024-07-13 01:00:59.261193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.941 [2024-07-13 01:00:59.261222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.941 qpair failed and we were unable to recover it. 00:35:47.941 [2024-07-13 01:00:59.261432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.941 [2024-07-13 01:00:59.261461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.941 qpair failed and we were unable to recover it. 00:35:47.941 [2024-07-13 01:00:59.261580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.941 [2024-07-13 01:00:59.261610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.941 qpair failed and we were unable to recover it. 00:35:47.941 [2024-07-13 01:00:59.261740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.941 [2024-07-13 01:00:59.261768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.941 qpair failed and we were unable to recover it. 00:35:47.941 [2024-07-13 01:00:59.261903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.261934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.262079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.262109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.262338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.262370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.262517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.262546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 
00:35:47.942 [2024-07-13 01:00:59.262742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.262772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.263028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.263058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.263251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.263282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.263511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.263542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.263675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.263703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.263910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.263938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.264143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.264172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.264391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.264421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.264561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.264592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.264776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.264807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 
00:35:47.942 [2024-07-13 01:00:59.264932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.264966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.265121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.265153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.265299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.265333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.265538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.265568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.265753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.265783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.265923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.265951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.266187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.266221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.266488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.266521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.266726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.266756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.266897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.266929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 
00:35:47.942 [2024-07-13 01:00:59.267070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.267105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.267324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.267361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.267550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.267579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.267736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.267766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.267973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.268008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.268209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.268259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.268468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.268499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.268628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.268658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.268857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.268887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.269110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.269142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 
00:35:47.942 [2024-07-13 01:00:59.269337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.942 [2024-07-13 01:00:59.269367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.942 qpair failed and we were unable to recover it. 00:35:47.942 [2024-07-13 01:00:59.269495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.269526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.269762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.269792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.269942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.269973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.270153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.270182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.270351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.270381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.270577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.270606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.270733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.270762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.270972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.271003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.271260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.271291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 
00:35:47.943 [2024-07-13 01:00:59.271423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.271454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.271692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.271723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.271995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.272026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.272213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.272254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.272380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.272410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.272594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.272623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.272824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.272855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.272981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.273012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.273237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.273270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.273400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.273429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 
00:35:47.943 [2024-07-13 01:00:59.273688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.273720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.273921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.273952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.274154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.274184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.274448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.274479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.274700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.274730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.274856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.274886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.275151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.275181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.275327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.275358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.275496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.275526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.275727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.275757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 
00:35:47.943 [2024-07-13 01:00:59.276011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.276041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.276267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.276305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.276533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.276564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.276838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.276869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.277007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.277038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.277171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.277201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.277338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.277368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.277549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.277580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.277829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.277859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.278044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.278074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 
00:35:47.943 [2024-07-13 01:00:59.278221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.278265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.943 [2024-07-13 01:00:59.278463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.943 [2024-07-13 01:00:59.278493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.943 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.278695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.278724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.278922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.278950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.279136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.279167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.279392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.279424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.279705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.279734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.279999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.280029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.280275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.280307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.280594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.280625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 
00:35:47.944 [2024-07-13 01:00:59.280900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.280929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.281044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.281075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.281305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.281342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.281542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.281572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.281715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.281744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.281860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.281889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.282100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.282131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.282355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.282387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.282646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.282677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.282826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.282856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 
00:35:47.944 [2024-07-13 01:00:59.282995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.283025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.283205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.283250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.283536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.283566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.283705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.283735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.283892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.283921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.284122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.284150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.284367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.284399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.284561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.284590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.284864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.284895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 00:35:47.944 [2024-07-13 01:00:59.285169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.944 [2024-07-13 01:00:59.285200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.944 qpair failed and we were unable to recover it. 
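For anyone triaging the block above: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connection attempt to 10.0.0.2:4420 (the default NVMe/TCP port) was rejected because nothing was listening on that port yet, which in this test flow typically means the nvmf target had not finished starting its listener. A minimal standalone sketch, not SPDK code, that reproduces the same error ingredients against a port with no listener:

    /* Minimal sketch (not SPDK code): connecting to a TCP port with no
     * listener yields errno 111 (ECONNREFUSED), matching the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }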
00:35:47.944 [2024-07-13 01:00:59.286659] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:47.944 [2024-07-13 01:00:59.286714] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[log collapsed: roughly ten further repetitions of the connect() failed, errno = 111 / sock connection error / qpair failed sequence are interleaved around these two initialization lines, 01:00:59.285395 through 01:00:59.287052]
[log collapsed: roughly 50 further repetitions of the same connect() failed, errno = 111 / sock connection error of tqpair=0x7fa968000b90 / qpair failed and we were unable to recover it sequence, 01:00:59.287307 through 01:00:59.298279]
00:35:47.946 [2024-07-13 01:00:59.298408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.298437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.298646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.298680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.298881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.298910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.299095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.299126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.299343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.299375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.299655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.299687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.299823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.299853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.299989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.300019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.300217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.300270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.300424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.300462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 
00:35:47.946 [2024-07-13 01:00:59.300674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.300704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.300920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.300953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.301254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.301286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.301416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.301448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.301579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.301611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.301868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.301898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.302108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.302137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.302264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.302295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.302483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.302513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.302714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.302744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 
00:35:47.946 [2024-07-13 01:00:59.302928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.302959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.303159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.303197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.303342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.303372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.303495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.303526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.303723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.303754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.303867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.303896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.304141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.304169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.946 [2024-07-13 01:00:59.304394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.946 [2024-07-13 01:00:59.304424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.946 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.304567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.304598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.304777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.304807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 
00:35:47.947 [2024-07-13 01:00:59.305002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.305033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.305207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.305251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.305360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.305390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.305499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.305530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.305726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.305755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.305944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.305973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.306236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.306266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.306451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.306480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.306609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.306638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.306757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.306787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 
00:35:47.947 [2024-07-13 01:00:59.306897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.306927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.307103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.307139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.307278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.307309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.307433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.307463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.307759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.307796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.308048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.308079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.308295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.308327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.308538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.308567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.308850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.308882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.309032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.309063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 
00:35:47.947 [2024-07-13 01:00:59.309286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.309318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.309494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.309526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.309709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.309742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.309941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.309972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.310182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.310211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.310339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.310368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.310566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.310597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.310817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.310846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.311033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.311065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.311195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.311237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 
00:35:47.947 [2024-07-13 01:00:59.311439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.311468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.311607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.311644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.311753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.311783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.312050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.312082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.312190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.312219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.312348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.312378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.312639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.312668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.312857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.312888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.947 qpair failed and we were unable to recover it. 00:35:47.947 [2024-07-13 01:00:59.313037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.947 [2024-07-13 01:00:59.313067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.313256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.313288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 
00:35:47.948 [2024-07-13 01:00:59.313420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.313450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.313649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.313680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.313889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.313919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.314097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.314127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.314345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.314376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.314670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.314702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.314835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.314865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.314982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.315011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.315203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.315241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.315384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.315414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 
00:35:47.948 [2024-07-13 01:00:59.315594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.315626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.315832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.315864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.316140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.316170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.316315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.316348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.316467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.316499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.316679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.316710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.316928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.316958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.317147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.317177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.317310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.317341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.317551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.317582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 
00:35:47.948 [2024-07-13 01:00:59.317826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.317856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.317974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.318004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.318246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.318277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.318398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.318427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.318534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.948 [2024-07-13 01:00:59.318563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.318757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.318788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.319058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.319087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.319215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.319257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.319447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.319475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.319605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.319634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 
00:35:47.948 [2024-07-13 01:00:59.319833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.319863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.320039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.320074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.320252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.320283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.320471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.320500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.320629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.320657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.320838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.320868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.321064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.321097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.321393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.321426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.321623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.948 [2024-07-13 01:00:59.321662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.948 qpair failed and we were unable to recover it. 00:35:47.948 [2024-07-13 01:00:59.321916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.321947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 
00:35:47.949 [2024-07-13 01:00:59.322065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.322095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.322235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.322265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.322463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.322493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.322604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.322634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.322856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.322885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.323113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.323142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.323393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.323427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.323614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.323643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.323823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.323851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.324031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.324059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 
00:35:47.949 [2024-07-13 01:00:59.324186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.324215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.324434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.324465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.324656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.324686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.324886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.324916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.325175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.325204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.325340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.325371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.325501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.325530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.325706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.325737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.325855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.325884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.326069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.326099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 
00:35:47.949 [2024-07-13 01:00:59.326242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.326274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.326518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.326547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.326654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.326683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.326857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.326886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.327112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.327142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.327266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.327297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.327417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.327446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.327587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.327615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.327812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.327842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 00:35:47.949 [2024-07-13 01:00:59.328036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.949 [2024-07-13 01:00:59.328066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.949 qpair failed and we were unable to recover it. 
[... the same connect() failed, errno = 111 / sock connection error / qpair failed triplet repeats verbatim for tqpair=0x7fa968000b90 on every subsequent attempt, 01:00:59.326518 through 01:00:59.360736 ...]
00:35:47.953 [2024-07-13 01:00:59.361309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same triplet keeps repeating immediately before and after the notice (tqpair=0x7fa968000b90, 01:00:59.360949 through 01:00:59.363267) ...]
00:35:47.953 [2024-07-13 01:00:59.363531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.953 [2024-07-13 01:00:59.363601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:47.953 qpair failed and we were unable to recover it.
00:35:47.953 [2024-07-13 01:00:59.363769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.953 [2024-07-13 01:00:59.363837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.953 qpair failed and we were unable to recover it.
00:35:47.953 [2024-07-13 01:00:59.364029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fb60 is same with the state(5) to be set
00:35:47.953 [2024-07-13 01:00:59.364383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.953 [2024-07-13 01:00:59.364453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.953 qpair failed and we were unable to recover it.
[... the tqpair=0x7fa968000b90 triplet then resumes repeating, 01:00:59.364768 through the end of this excerpt at 01:00:59.372011 ...]
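Editorial note on "qpair failed and we were unable to recover it": the initiator keeps re-dialing the target and gives up on the qpair once its attempts keep hitting ECONNREFUSED. The sketch below is a hedged illustration of that bounded-retry, then give-up pattern in plain C; it is not the actual SPDK/NVMe recovery path, and connect_with_retry with its parameters is a hypothetical helper named only for this example.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to connect up to max_tries times, sleeping between attempts.
 * Returns a connected fd, or -1 once every attempt has failed. */
static int connect_with_retry(const char *ip, unsigned short port, int max_tries)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };

    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int attempt = 1; attempt <= max_tries; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                      /* recovered */
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                attempt, errno, strerror(errno));
        close(fd);
        sleep(1);                           /* crude fixed backoff */
    }
    return -1;                              /* unable to recover */
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 5); /* address/port from the log */

    if (fd < 0) {
        fprintf(stderr, "giving up: connection never became available\n");
        return 1;
    }
    close(fd);
    return 0;
}

With the log's values and no listener on 10.0.0.2:4420, every attempt fails with errno 111 and the function returns -1, the toy analogue of the repeated give-up lines in this section.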
00:35:47.954 [2024-07-13 01:00:59.372266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.954 [2024-07-13 01:00:59.372297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.954 qpair failed and we were unable to recover it. 00:35:47.954 [2024-07-13 01:00:59.372491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.954 [2024-07-13 01:00:59.372520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.954 qpair failed and we were unable to recover it. 00:35:47.954 [2024-07-13 01:00:59.372693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.954 [2024-07-13 01:00:59.372724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.954 qpair failed and we were unable to recover it. 00:35:47.954 [2024-07-13 01:00:59.372994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.954 [2024-07-13 01:00:59.373024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.954 qpair failed and we were unable to recover it. 00:35:47.954 [2024-07-13 01:00:59.373145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.954 [2024-07-13 01:00:59.373173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.954 qpair failed and we were unable to recover it. 00:35:47.954 [2024-07-13 01:00:59.373351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.954 [2024-07-13 01:00:59.373381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.954 qpair failed and we were unable to recover it. 00:35:47.954 [2024-07-13 01:00:59.373500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.954 [2024-07-13 01:00:59.373529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.954 qpair failed and we were unable to recover it. 00:35:47.954 [2024-07-13 01:00:59.373658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.373688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.373927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.373957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.374174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.374204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 
00:35:47.955 [2024-07-13 01:00:59.374411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.374442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.374553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.374581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.374703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.374733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.374850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.374885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.375024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.375052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.375305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.375336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.375461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.375490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.375737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.375767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.376029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.376059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.376250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.376281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 
00:35:47.955 [2024-07-13 01:00:59.376554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.376584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.376776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.376805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.376917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.376946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.377186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.377215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.377494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.377525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.377767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.377796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.378002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.378032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.378221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.378266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.378465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.378495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.378689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.378724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 
00:35:47.955 [2024-07-13 01:00:59.378918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.378948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.379140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.379170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.379291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.379322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.379562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.379592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.379764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.379793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.380002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.380031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.380218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.380260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.380427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.380457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.380693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.380723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.380910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.380940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 
00:35:47.955 [2024-07-13 01:00:59.381138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.381170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.381462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.381494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.955 [2024-07-13 01:00:59.381710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.955 [2024-07-13 01:00:59.381744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.955 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.382047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.382082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.382376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.382414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.382707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.382740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.382974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.383007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.383291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.383327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.383506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.383538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.383649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.383680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 
00:35:47.956 [2024-07-13 01:00:59.383891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.383923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.384209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.384249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.384514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.384546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.384727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.384757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.384941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.384973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.385104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.385133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.385365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.385397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.385652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.385683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.385863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.385893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.386070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.386101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 
00:35:47.956 [2024-07-13 01:00:59.386296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.386326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.386602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.386633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.386751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.386780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.386955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.386984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.387155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.387186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.387372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.387403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.387520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.387549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.387660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.387689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.387962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.387992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.388098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.388136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 
00:35:47.956 [2024-07-13 01:00:59.388330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.388360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.388534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.388564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.388760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.388792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.389032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.389063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.389250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.389282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.389416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.389446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.389709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.389739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.389929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.389958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.390162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.390194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.390341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.390372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 
00:35:47.956 [2024-07-13 01:00:59.390588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.390618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.390857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.390887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.391090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.956 [2024-07-13 01:00:59.391120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.956 qpair failed and we were unable to recover it. 00:35:47.956 [2024-07-13 01:00:59.391393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.391425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.391665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.391695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.391818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.391848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.391965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.391994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.392170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.392199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.392377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.392441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.392685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.392732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 
00:35:47.957 [2024-07-13 01:00:59.393013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.393044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.393284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.393316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.393508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.393538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.393727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.393757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.393932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.393962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.394101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.394131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.394323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.394355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.394556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.394587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.394848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.394878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.395119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.395149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 
00:35:47.957 [2024-07-13 01:00:59.395281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.395312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.395447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.395477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.395714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.395745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.395933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.395963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.396147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.396177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.396368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.396399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.396566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.396596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.396768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.396799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.396982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.397011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.397134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.397169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 
00:35:47.957 [2024-07-13 01:00:59.397302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.397335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.397459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.397489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.397642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.397673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.397855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.397885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.398088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.398118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.398309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.398340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.398527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.398558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.398738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.398769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.398963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.398992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.399143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.399173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 
00:35:47.957 [2024-07-13 01:00:59.399360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.399391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.399576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.399609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.399818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.957 [2024-07-13 01:00:59.399850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.957 qpair failed and we were unable to recover it. 00:35:47.957 [2024-07-13 01:00:59.400073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.400103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.400300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.400332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.400453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.400483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.400652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.400682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.400917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.400948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.401163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.401194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.401375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.401406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 
00:35:47.958 [2024-07-13 01:00:59.401599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.401630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.401868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.401900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.402184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.402217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.402473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.402505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.402746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.402778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.402963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.402994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.403171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:47.958 [2024-07-13 01:00:59.403204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:47.958 [2024-07-13 01:00:59.403212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:47.958 [2024-07-13 01:00:59.403219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:47.958 [2024-07-13 01:00:59.403254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:47.958 [2024-07-13 01:00:59.403214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.403261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.403451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.403480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.403698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.403726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.403858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:35:47.958 [2024-07-13 01:00:59.403947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:35:47.958 [2024-07-13 01:00:59.404028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:35:47.958 [2024-07-13 01:00:59.404030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:35:47.958 [2024-07-13 01:00:59.403932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.403968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.404207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.404248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.404373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.404401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.404593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.404622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.404835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.404867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.404987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.405018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.405189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.405219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.405493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.958 [2024-07-13 01:00:59.405538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:47.958 qpair failed and we were unable to recover it.
00:35:47.958 [2024-07-13 01:00:59.405789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.405821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.406010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.406040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.406305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.406337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.406458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.406488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.406760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.406789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.406974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.407004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.407137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.407167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.407312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.407343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.407570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.407601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.407772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.407802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 
00:35:47.958 [2024-07-13 01:00:59.408041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.408071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.408286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.408318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.408493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.408523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.958 qpair failed and we were unable to recover it. 00:35:47.958 [2024-07-13 01:00:59.408720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.958 [2024-07-13 01:00:59.408750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.408945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.408975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.409159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.409188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.409462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.409493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.409751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.409782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.410026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.410056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.410174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.410204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 
00:35:47.959 [2024-07-13 01:00:59.410408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.410439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.410650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.410680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.410871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.410902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.411092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.411122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.411242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.411272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.411489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.411520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.411714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.411751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.412023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.412054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.412291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.412323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.412585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.412615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 
00:35:47.959 [2024-07-13 01:00:59.412919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.412949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.413187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.413216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.413535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.413566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.413777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.413807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.414070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.414101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.414343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.414373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.414559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.414589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.414877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.414907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.415175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.415204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.415500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.415531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 
00:35:47.959 [2024-07-13 01:00:59.415818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.415850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.415979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.416009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.416200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.416248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.416439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.416470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.416597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.416627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.416798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.416828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.417114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.417143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.417270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.417301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.417492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.417522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 00:35:47.959 [2024-07-13 01:00:59.417716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.959 [2024-07-13 01:00:59.417746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.959 qpair failed and we were unable to recover it. 
00:35:47.960 [2024-07-13 01:00:59.417947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.417977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.418243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.418275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.418480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.418510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.418737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.418774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.419027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.419056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.419242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.419275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.419489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.419519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.419775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.419806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.420038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.420070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.420289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.420320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 
00:35:47.960 [2024-07-13 01:00:59.420528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.420558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.420735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.420765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.420896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.420927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.421151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.421182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.421430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.421463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.421734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.421766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.422004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.422036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.422215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.422256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.422543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.422574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.422826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.422858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 
00:35:47.960 [2024-07-13 01:00:59.423098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.423130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.423371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.423403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.423526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.423556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.423750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.423780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.424068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.424100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.424368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.424401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.424657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.424688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.424860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.424890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.425103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.425133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.425324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.425356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 
00:35:47.960 [2024-07-13 01:00:59.425558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.425587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.425733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.425763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.426027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.426058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.426244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.426275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.426394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.426424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.426675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.426706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.426982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.427013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.427217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.427259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.427443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.427472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 00:35:47.960 [2024-07-13 01:00:59.427651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.427681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.960 qpair failed and we were unable to recover it. 
00:35:47.960 [2024-07-13 01:00:59.427891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.960 [2024-07-13 01:00:59.427922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.428118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.428147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.428358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.428389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.428674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.428704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.428934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.428991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.429299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.429332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.429598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.429629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.429818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.429848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.430116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.430146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.430390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.430420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 
00:35:47.961 [2024-07-13 01:00:59.430683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.430713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.430897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.430927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.431189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.431218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.431490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.431521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.431807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.431837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.432052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.432082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.432271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.432301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.432489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.432529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.432771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.432799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.432981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.433011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 
00:35:47.961 [2024-07-13 01:00:59.433273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.433305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.433513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.433543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.433792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.433825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.434033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.434064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.434242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.434274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.434536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.434570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.434763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.434795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.435067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.435100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.435344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.435379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.435637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.435671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 
00:35:47.961 [2024-07-13 01:00:59.435904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.436138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.436169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.436285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.436316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.436579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.436612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.436878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.436910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.437100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.437128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.437321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.437351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.437613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.437642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.437928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.437958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.438214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.438252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 
00:35:47.961 [2024-07-13 01:00:59.438440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.961 [2024-07-13 01:00:59.438469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.961 qpair failed and we were unable to recover it. 00:35:47.961 [2024-07-13 01:00:59.438706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.438735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.438906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.438935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.439198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.439234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.439476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.439533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.439753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.439783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.440019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.440048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.440235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.440267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.440460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.440490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.440748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.440777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 
00:35:47.962 [2024-07-13 01:00:59.440913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.440943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.441187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.441216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.441468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.441498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.441739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.441769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.441895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.441924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.442184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.442214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.442512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.442542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.442743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.442781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.442991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.443020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 00:35:47.962 [2024-07-13 01:00:59.443258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.962 [2024-07-13 01:00:59.443289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420 00:35:47.962 qpair failed and we were unable to recover it. 
00:35:47.962 [2024-07-13 01:00:59.443530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.962 [2024-07-13 01:00:59.443560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:47.962 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failed triplet repeated continuously (~200 occurrences) from 01:00:59.443 through 01:00:59.497: for tqpair=0x7fa960000b90 up to 01:00:59.449, then for tqpair=0x7fa958000b90, always with addr=10.0.0.2, port=4420, each attempt ending in "qpair failed and we were unable to recover it." ...]
00:35:48.233 [2024-07-13 01:00:59.497001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.233 [2024-07-13 01:00:59.497031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.233 qpair failed and we were unable to recover it.
00:35:48.233 [2024-07-13 01:00:59.497204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.233 [2024-07-13 01:00:59.497258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.233 qpair failed and we were unable to recover it. 00:35:48.233 [2024-07-13 01:00:59.497457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.233 [2024-07-13 01:00:59.497487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.233 qpair failed and we were unable to recover it. 00:35:48.233 [2024-07-13 01:00:59.497725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.233 [2024-07-13 01:00:59.497756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.233 qpair failed and we were unable to recover it. 00:35:48.233 [2024-07-13 01:00:59.497943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.233 [2024-07-13 01:00:59.497973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.233 qpair failed and we were unable to recover it. 00:35:48.233 [2024-07-13 01:00:59.498173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.233 [2024-07-13 01:00:59.498203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.233 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.498478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.498509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.498753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.498783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.499022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.499053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.499294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.499324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.499478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.499508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 
00:35:48.234 [2024-07-13 01:00:59.499745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.499775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.500041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.500070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.500192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.500221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.500356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.500386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.500625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.500659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.500834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.500863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.501148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.501178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.501359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.501388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.501600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.501628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.501897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.501927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 
00:35:48.234 [2024-07-13 01:00:59.502179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.502208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.502503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.502533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.502721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.502751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.502940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.502969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.503163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.503192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.503398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.503429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.503620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.503650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.503769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.503799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.503988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.504018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.504255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.504287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 
00:35:48.234 [2024-07-13 01:00:59.504459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.504488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.504614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.504643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.504852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.504882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.505070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.505100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.505292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.505322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.505434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.505464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.505662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.505692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.505831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.505861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.505976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.506006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.506156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.506186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 
00:35:48.234 [2024-07-13 01:00:59.506386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.506416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.506621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.506652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.506934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.506964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.507251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.507281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.234 [2024-07-13 01:00:59.507560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.234 [2024-07-13 01:00:59.507589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.234 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.507779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.507809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.508095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.508124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.508389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.508419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.508616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.508645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.508880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.508909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 
00:35:48.235 [2024-07-13 01:00:59.509088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.509118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.509327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.509358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.509545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.509575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.509747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.509777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.509998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.510038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.510299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.510329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.510527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.510556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.510768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.510797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.511048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.511077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.511308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.511339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 
00:35:48.235 [2024-07-13 01:00:59.511478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.511507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.511691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.511720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.511914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.511943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.512115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.512144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.512281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.512311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.512574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.512604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.512810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.512840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.513034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.513063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.513316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.513347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 00:35:48.235 [2024-07-13 01:00:59.513462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.235 [2024-07-13 01:00:59.513491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.235 qpair failed and we were unable to recover it. 
00:35:48.235 [2024-07-13 01:00:59.513672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.513702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.513885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.513914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.514054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.514084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.514272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.514302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.514496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.514526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.514823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.514852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:35:48.235 [2024-07-13 01:00:59.515138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.515176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.515377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.515406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:35:48.235 [2024-07-13 01:00:59.515598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.515631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.515813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.515843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:48.235 [2024-07-13 01:00:59.516053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.516083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:35:48.235 [2024-07-13 01:00:59.516271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 [2024-07-13 01:00:59.516304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.516488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.235 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:48.235 [2024-07-13 01:00:59.516518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.235 qpair failed and we were unable to recover it.
00:35:48.235 [2024-07-13 01:00:59.516706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.236 [2024-07-13 01:00:59.516736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.236 qpair failed and we were unable to recover it.
00:35:48.236 [2024-07-13 01:00:59.516998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.236 [2024-07-13 01:00:59.517028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.236 qpair failed and we were unable to recover it.
00:35:48.236 [2024-07-13 01:00:59.517205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.236 [2024-07-13 01:00:59.517259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.236 qpair failed and we were unable to recover it.
00:35:48.236 [2024-07-13 01:00:59.517387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.236 [2024-07-13 01:00:59.517417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.236 qpair failed and we were unable to recover it.
00:35:48.236 [2024-07-13 01:00:59.517658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.236 [2024-07-13 01:00:59.517689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.236 qpair failed and we were unable to recover it.
00:35:48.236 [2024-07-13 01:00:59.517803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.517832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.518094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.518124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.518399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.518429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.518669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.518699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.518837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.518872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.519049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.519079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.519203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.519241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.519430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.519459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.519719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.519749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.519880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.519909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 
00:35:48.236 [2024-07-13 01:00:59.520148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.520178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.520418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.520448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.520655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.520685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.520880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.520909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.521196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.521235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.521361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.521393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.521538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.521567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.521751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.521780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.521907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.521938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.522066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.522095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 
00:35:48.236 [2024-07-13 01:00:59.522291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.522323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.522446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.522476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.522658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.522688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.522925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.522954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.523078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.523108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.523294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.523324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.523511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.523541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.523655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.236 [2024-07-13 01:00:59.523684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.236 qpair failed and we were unable to recover it. 00:35:48.236 [2024-07-13 01:00:59.523817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.237 [2024-07-13 01:00:59.523849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.237 qpair failed and we were unable to recover it. 00:35:48.237 [2024-07-13 01:00:59.524038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.237 [2024-07-13 01:00:59.524067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420 00:35:48.237 qpair failed and we were unable to recover it. 
00:35:48.237 [2024-07-13 01:00:59.524257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.524288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.524502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.524551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.524672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.524702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.524821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.524850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.525087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.525116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.525247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.525278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.525408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.525439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.525558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.525586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.525761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.525790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.526033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.526064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.526184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.526214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.526353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.526383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.526622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.526651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.526785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.526815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.527064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.527100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.527288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.527318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.527508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.527538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.527728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.527757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.527981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.528012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.528139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.528169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.528381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.528411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.528602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.528632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.528754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.528783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.528963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.528992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.529244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.529274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.529537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.529567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.529692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.529729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.529857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.529886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.530019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.530049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.530323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.530354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.530503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.530532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.530746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.530775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.530896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.530924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.531124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.531159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.531416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.531448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.531719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.531748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.531875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.531904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.532059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.532093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.237 [2024-07-13 01:00:59.532289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.237 [2024-07-13 01:00:59.532320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.237 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.532586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.532615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.532744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.532773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.532992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.533032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.533175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.533206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.533417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.533448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.533687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.533716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.533995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.534024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.534320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.534352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.534481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.534511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.534631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.534662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.534874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.534904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.535076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.535106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.535304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.535335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.535532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.535561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.535735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.535765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.536004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.536040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.536353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.536383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.536578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.536608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.536921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.536951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.537131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.537160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.537355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.537386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.537522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.537551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.537673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.537702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.537970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.538000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.538182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.538212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.538420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.538450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.538587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.538618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.538798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.538828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.539058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.539088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.539220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.539261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.539421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.539453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.539600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.539630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.539818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.539847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.540125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.540155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.540448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.540479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.540752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.540782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.541078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.541107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.541286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.541316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.541441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.541471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.541603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.238 [2024-07-13 01:00:59.541632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.238 qpair failed and we were unable to recover it.
00:35:48.238 [2024-07-13 01:00:59.541762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.541791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.541941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.541970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.542107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.542159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.542479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.542514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.542682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.542712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.542862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.542892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.543152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.543181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.543402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.543433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.543699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.543729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.543878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.543908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.544079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.544109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.544329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.544360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.544566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.544598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.544739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.544768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.545047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.545076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.545212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.545255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.545389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.545418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.545561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.545594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.545842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.545873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.546042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.546071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.546204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.546243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.546395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.546425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.546616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.546644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.546845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.546875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.547091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.547122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.547341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.547372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.547585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.547614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.547749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.547778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.547900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.547929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.548135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.548164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.548338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.548369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.548614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.548644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.548831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.548860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.549098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.549127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:48.239 [2024-07-13 01:00:59.549320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.549352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:48.239 [2024-07-13 01:00:59.549653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.549685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 [2024-07-13 01:00:59.549895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.549927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:48.239 [2024-07-13 01:00:59.550175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.550207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.239 qpair failed and we were unable to recover it.
00:35:48.239 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:48.239 [2024-07-13 01:00:59.550395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.239 [2024-07-13 01:00:59.550425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 [2024-07-13 01:00:59.550567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.550596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.550780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.550814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.550982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.551011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.551182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.551212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.551375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.551406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.551622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.551652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.551882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.551911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.552088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.552117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.552320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.552350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.552597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.552626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.552817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.552846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.552986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.553015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.553203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.553243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.553442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.553471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.553652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.553681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.553822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.553851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.554031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.554060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.554311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.554340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.554460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.554489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.554672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.554701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.554983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.555011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.555254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.555284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.555417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.555446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.555582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.555611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.555868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.555897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.556081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.556110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.556311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.556341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.556533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.556562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa958000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.556833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.556872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.557090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.557125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.557377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.557408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.557564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.557594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.557720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.557749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.557929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.557957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.240 [2024-07-13 01:00:59.558149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.240 [2024-07-13 01:00:59.558178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.240 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.558358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.558389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.558526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.558556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.558741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.558771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.559010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.559038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.559318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.559348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.559611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.559641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.559951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.559986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.560192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.560221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.560445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.560476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.560713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.560742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.560946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.560976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.561244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.561276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.561461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.561491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.561697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.561727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.561916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.561945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.562207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.562245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.562377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.562407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.562579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.562608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.562911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.562943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.563210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.563251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.563512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.563542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.563864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.563895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.564153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.564184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.564462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.564493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.564730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.564760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.565025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.565055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.565184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.565213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.565426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.565459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.565723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.565754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.565888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.565917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.566167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.566197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.566393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.566424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.566619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.566648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa960000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.566921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.566964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.567207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.241 [2024-07-13 01:00:59.567252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.241 qpair failed and we were unable to recover it.
00:35:48.241 [2024-07-13 01:00:59.567387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.241 [2024-07-13 01:00:59.567417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.241 qpair failed and we were unable to recover it. 00:35:48.241 [2024-07-13 01:00:59.567536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.241 [2024-07-13 01:00:59.567565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.241 qpair failed and we were unable to recover it. 00:35:48.241 [2024-07-13 01:00:59.567747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.241 [2024-07-13 01:00:59.567777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.241 qpair failed and we were unable to recover it. 00:35:48.241 [2024-07-13 01:00:59.568053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.241 Malloc0 00:35:48.241 [2024-07-13 01:00:59.568083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.241 qpair failed and we were unable to recover it. 00:35:48.241 [2024-07-13 01:00:59.568237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.241 [2024-07-13 01:00:59.568267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.241 qpair failed and we were unable to recover it. 00:35:48.241 [2024-07-13 01:00:59.568418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.241 [2024-07-13 01:00:59.568448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.242 qpair failed and we were unable to recover it. 00:35:48.242 [2024-07-13 01:00:59.568628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.242 [2024-07-13 01:00:59.568658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.242 qpair failed and we were unable to recover it. 00:35:48.242 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.242 [2024-07-13 01:00:59.568843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.242 [2024-07-13 01:00:59.568874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.242 qpair failed and we were unable to recover it. 00:35:48.242 [2024-07-13 01:00:59.569048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.242 [2024-07-13 01:00:59.569078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420 00:35:48.242 qpair failed and we were unable to recover it. 
00:35:48.242 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:48.242 [2024-07-13 01:00:59.569351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.242 [2024-07-13 01:00:59.569383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.242 qpair failed and we were unable to recover it.
00:35:48.242 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:48.242 [2024-07-13 01:00:59.569671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.242 [2024-07-13 01:00:59.569701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.242 qpair failed and we were unable to recover it.
00:35:48.242 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 7 more identical connect() failures for tqpair=0x7fa968000b90 (01:00:59.569980 through 01:00:59.571465) ...]
00:35:48.242 [2024-07-13 01:00:59.571703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.242 [2024-07-13 01:00:59.571732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa968000b90 with addr=10.0.0.2, port=4420
00:35:48.242 qpair failed and we were unable to recover it.
[... 4 more identical failures for tqpair=0x7fa968000b90 (01:00:59.571874 through 01:00:59.572660) ...]
00:35:48.242 [2024-07-13 01:00:59.572877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.242 [2024-07-13 01:00:59.572932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.242 qpair failed and we were unable to recover it.
[... 4 more identical failures for tqpair=0x1321b60 (01:00:59.573121 through 01:00:59.573859) ...]
00:35:48.242 [2024-07-13 01:00:59.574045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.242 [2024-07-13 01:00:59.574075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.242 qpair failed and we were unable to recover it.
[... 7 more identical failures for tqpair=0x1321b60 (01:00:59.574200 through 01:00:59.575610) ...]
00:35:48.242 [2024-07-13 01:00:59.575691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:48.242 [2024-07-13 01:00:59.575792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.242 [2024-07-13 01:00:59.575830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.242 qpair failed and we were unable to recover it.
[... 1 more identical failure for tqpair=0x1321b60 at 01:00:59.576094 ...]
00:35:48.242 [2024-07-13 01:00:59.576365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.242 [2024-07-13 01:00:59.576395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.242 qpair failed and we were unable to recover it.
[... 29 more identical connect() failures for tqpair=0x1321b60 (01:00:59.576664 through 01:00:59.583293), each ending "qpair failed and we were unable to recover it." ...]
00:35:48.243 [2024-07-13 01:00:59.583513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.243 [2024-07-13 01:00:59.583543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.243 qpair failed and we were unable to recover it.
00:35:48.243 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:48.243 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:48.243 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:48.243 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:48.243 [2024-07-13 01:00:59.585453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.243 [2024-07-13 01:00:59.585501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.243 qpair failed and we were unable to recover it.
[... 6 more identical failures for tqpair=0x1321b60 (01:00:59.585796 through 01:00:59.586846) ...]
00:35:48.243 [2024-07-13 01:00:59.587173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.243 [2024-07-13 01:00:59.587203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.243 qpair failed and we were unable to recover it.
[... 19 more identical connect() failures for tqpair=0x1321b60 (01:00:59.587453 through 01:00:59.591641), each ending "qpair failed and we were unable to recover it." ...]
00:35:48.244 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:48.244 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:48.244 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:48.244 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:48.244 [2024-07-13 01:00:59.593404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.244 [2024-07-13 01:00:59.593452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.244 qpair failed and we were unable to recover it.
[... 7 more identical failures for tqpair=0x1321b60 (01:00:59.593694 through 01:00:59.595245) ...]
00:35:48.244 [2024-07-13 01:00:59.595466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.244 [2024-07-13 01:00:59.595497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.244 qpair failed and we were unable to recover it.
[... 18 more identical connect() failures for tqpair=0x1321b60 (01:00:59.595749 through 01:00:59.599593), each ending "qpair failed and we were unable to recover it." ...]
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:48.245 [2024-07-13 01:00:59.601470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.245 [2024-07-13 01:00:59.601517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.245 qpair failed and we were unable to recover it.
[... 7 more identical failures for tqpair=0x1321b60 (01:00:59.601759 through 01:00:59.603378) ...]
00:35:48.245 [2024-07-13 01:00:59.603621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.245 [2024-07-13 01:00:59.603650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.245 qpair failed and we were unable to recover it.
00:35:48.245 [2024-07-13 01:00:59.603898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:48.245 [2024-07-13 01:00:59.603921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.245 [2024-07-13 01:00:59.603949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1321b60 with addr=10.0.0.2, port=4420
00:35:48.245 qpair failed and we were unable to recover it.
00:35:48.245 [2024-07-13 01:00:59.606264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.245 [2024-07-13 01:00:59.606384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.245 [2024-07-13 01:00:59.606427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.245 [2024-07-13 01:00:59.606450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.245 [2024-07-13 01:00:59.606469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.245 [2024-07-13 01:00:59.606515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.245 qpair failed and we were unable to recover it.
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:48.245 [2024-07-13 01:00:59.616198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.245 [2024-07-13 01:00:59.616321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.245 [2024-07-13 01:00:59.616353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:48.245 [2024-07-13 01:00:59.616376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.245 [2024-07-13 01:00:59.616391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.245 [2024-07-13 01:00:59.616424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.245 qpair failed and we were unable to recover it.
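The xtrace lines above show the target being rebuilt step by step while the initiator keeps retrying: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach namespace Malloc0, then add the data and discovery listeners on 10.0.0.2:4420. A rough equivalent of that rpc_cmd sequence via SPDK's scripts/rpc.py (a sketch assuming the default RPC socket; all values are taken from the trace):

  # Target bring-up as traced from host/target_disconnect.sh
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420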
00:35:48.245 01:00:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1619988
00:35:48.245 [2024-07-13 01:00:59.626166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.245 [2024-07-13 01:00:59.626247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.245 [2024-07-13 01:00:59.626269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.245 [2024-07-13 01:00:59.626279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.245 [2024-07-13 01:00:59.626289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.245 [2024-07-13 01:00:59.626311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.245 qpair failed and we were unable to recover it.
[... 2 more identical CONNECT failure sequences starting at 01:00:59.636178 and 01:00:59.646202 ...]
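Once the listener is back (NOTICE at 01:00:59.603898), the failure mode shifts: the TCP connect now succeeds, but the Fabrics CONNECT for the I/O qpair is rejected because the admin controller it names (ID 0x1) no longer exists on the rebuilt target. Decoding the reported status (per the NVMe-oF specification, not taken from this log): sct 1 is the command-specific status type, and sc 130 is 0x82, which for the CONNECT command means "Connect Invalid Parameters" — consistent with the target's "Unknown controller ID 0x1". A one-line conversion (sketch):

  # sc is logged in decimal; in hex it is the Fabrics CONNECT status 0x82 (Connect Invalid Parameters)
  printf 'sct=%d sc=0x%02x\n' 1 130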
00:35:48.245 [2024-07-13 01:00:59.656256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.245 [2024-07-13 01:00:59.656308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.245 [2024-07-13 01:00:59.656323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.245 [2024-07-13 01:00:59.656330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.245 [2024-07-13 01:00:59.656341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.245 [2024-07-13 01:00:59.656355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.245 qpair failed and we were unable to recover it. 00:35:48.245 [2024-07-13 01:00:59.666286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.245 [2024-07-13 01:00:59.666345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.245 [2024-07-13 01:00:59.666361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.245 [2024-07-13 01:00:59.666367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.245 [2024-07-13 01:00:59.666373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.245 [2024-07-13 01:00:59.666387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.245 qpair failed and we were unable to recover it. 00:35:48.245 [2024-07-13 01:00:59.676218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.245 [2024-07-13 01:00:59.676283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.246 [2024-07-13 01:00:59.676298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.246 [2024-07-13 01:00:59.676305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.246 [2024-07-13 01:00:59.676311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.246 [2024-07-13 01:00:59.676324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.246 qpair failed and we were unable to recover it. 
00:35:48.246 [2024-07-13 01:00:59.686313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.246 [2024-07-13 01:00:59.686368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.246 [2024-07-13 01:00:59.686385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.246 [2024-07-13 01:00:59.686393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.246 [2024-07-13 01:00:59.686399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.246 [2024-07-13 01:00:59.686413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.246 qpair failed and we were unable to recover it. 00:35:48.246 [2024-07-13 01:00:59.696323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.246 [2024-07-13 01:00:59.696378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.246 [2024-07-13 01:00:59.696393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.246 [2024-07-13 01:00:59.696400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.246 [2024-07-13 01:00:59.696406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.246 [2024-07-13 01:00:59.696420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.246 qpair failed and we were unable to recover it. 00:35:48.246 [2024-07-13 01:00:59.706350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.246 [2024-07-13 01:00:59.706436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.246 [2024-07-13 01:00:59.706451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.246 [2024-07-13 01:00:59.706457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.246 [2024-07-13 01:00:59.706463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.246 [2024-07-13 01:00:59.706476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.246 qpair failed and we were unable to recover it. 
00:35:48.246 [2024-07-13 01:00:59.716396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.246 [2024-07-13 01:00:59.716452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.246 [2024-07-13 01:00:59.716466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.246 [2024-07-13 01:00:59.716473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.246 [2024-07-13 01:00:59.716479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.246 [2024-07-13 01:00:59.716492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.246 qpair failed and we were unable to recover it. 00:35:48.246 [2024-07-13 01:00:59.726430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.246 [2024-07-13 01:00:59.726498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.246 [2024-07-13 01:00:59.726513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.246 [2024-07-13 01:00:59.726520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.246 [2024-07-13 01:00:59.726525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.246 [2024-07-13 01:00:59.726539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.246 qpair failed and we were unable to recover it. 00:35:48.246 [2024-07-13 01:00:59.736471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.246 [2024-07-13 01:00:59.736525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.246 [2024-07-13 01:00:59.736539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.246 [2024-07-13 01:00:59.736546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.246 [2024-07-13 01:00:59.736552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.246 [2024-07-13 01:00:59.736565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.246 qpair failed and we were unable to recover it. 
00:35:48.246 [2024-07-13 01:00:59.746473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.246 [2024-07-13 01:00:59.746571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.246 [2024-07-13 01:00:59.746585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.246 [2024-07-13 01:00:59.746592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.246 [2024-07-13 01:00:59.746600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.246 [2024-07-13 01:00:59.746614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.246 qpair failed and we were unable to recover it.
00:35:48.246 [2024-07-13 01:00:59.756503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.246 [2024-07-13 01:00:59.756564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.246 [2024-07-13 01:00:59.756579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.246 [2024-07-13 01:00:59.756586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.246 [2024-07-13 01:00:59.756592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.246 [2024-07-13 01:00:59.756606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.246 qpair failed and we were unable to recover it.
00:35:48.246 [2024-07-13 01:00:59.766570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.246 [2024-07-13 01:00:59.766628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.246 [2024-07-13 01:00:59.766642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.246 [2024-07-13 01:00:59.766649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.246 [2024-07-13 01:00:59.766654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.246 [2024-07-13 01:00:59.766667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.246 qpair failed and we were unable to recover it.
00:35:48.246 [2024-07-13 01:00:59.776587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.246 [2024-07-13 01:00:59.776640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.246 [2024-07-13 01:00:59.776655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.246 [2024-07-13 01:00:59.776661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.246 [2024-07-13 01:00:59.776667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.246 [2024-07-13 01:00:59.776680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.246 qpair failed and we were unable to recover it.
00:35:48.507 [2024-07-13 01:00:59.786623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.507 [2024-07-13 01:00:59.786686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.507 [2024-07-13 01:00:59.786701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.507 [2024-07-13 01:00:59.786708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.507 [2024-07-13 01:00:59.786714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.507 [2024-07-13 01:00:59.786727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.507 qpair failed and we were unable to recover it.
00:35:48.507 [2024-07-13 01:00:59.796688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.507 [2024-07-13 01:00:59.796774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.507 [2024-07-13 01:00:59.796789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.507 [2024-07-13 01:00:59.796796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.507 [2024-07-13 01:00:59.796801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.507 [2024-07-13 01:00:59.796815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.507 qpair failed and we were unable to recover it.
00:35:48.507 [2024-07-13 01:00:59.806695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.507 [2024-07-13 01:00:59.806775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.507 [2024-07-13 01:00:59.806790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.507 [2024-07-13 01:00:59.806796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.507 [2024-07-13 01:00:59.806802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.507 [2024-07-13 01:00:59.806815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.507 qpair failed and we were unable to recover it.
00:35:48.507 [2024-07-13 01:00:59.816690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.507 [2024-07-13 01:00:59.816742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.816758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.816764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.816770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.816784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.826769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.826836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.826851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.826857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.826863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.826876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.836748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.836852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.836866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.836876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.836882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.836895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.846777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.846830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.846844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.846850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.846856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.846869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.856812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.856874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.856889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.856895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.856901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.856915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.866843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.866896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.866911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.866917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.866923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.866937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.876881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.876935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.876950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.876956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.876962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.876975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.886888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.886940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.886954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.886960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.886966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.886979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.896928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.897025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.897040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.897046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.897052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.897065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.907013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.907068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.907083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.907089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.907095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.907108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.916966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.917023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.917037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.917044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.917050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.917064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.926972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.927026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.927040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.927050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.927055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.927069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.937004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.937060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.937075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.937082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.937088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.937101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.947038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.947124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.947138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.947145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.947151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.947164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.508 qpair failed and we were unable to recover it.
00:35:48.508 [2024-07-13 01:00:59.956995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.508 [2024-07-13 01:00:59.957076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.508 [2024-07-13 01:00:59.957090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.508 [2024-07-13 01:00:59.957096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.508 [2024-07-13 01:00:59.957102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.508 [2024-07-13 01:00:59.957115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:00:59.967022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:00:59.967077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:00:59.967092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:00:59.967099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:00:59.967104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:00:59.967118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:00:59.977144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:00:59.977203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:00:59.977219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:00:59.977229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:00:59.977235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:00:59.977248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:00:59.987166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:00:59.987222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:00:59.987240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:00:59.987247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:00:59.987252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:00:59.987266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:00:59.997208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:00:59.997280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:00:59.997295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:00:59.997303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:00:59.997309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:00:59.997323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:01:00.007282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:01:00.007398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:01:00.007421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:01:00.007431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:01:00.007439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:01:00.007476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:01:00.017240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:01:00.017347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:01:00.017365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:01:00.017376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:01:00.017382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:01:00.017398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:01:00.027323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:01:00.027432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:01:00.027448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:01:00.027456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:01:00.027462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:01:00.027477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:01:00.037325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:01:00.037394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:01:00.037409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:01:00.037415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:01:00.037421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:01:00.037435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:01:00.047343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:01:00.047403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:01:00.047418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:01:00.047425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:01:00.047431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:01:00.047444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.509 [2024-07-13 01:01:00.057373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.509 [2024-07-13 01:01:00.057436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.509 [2024-07-13 01:01:00.057451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.509 [2024-07-13 01:01:00.057458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.509 [2024-07-13 01:01:00.057463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.509 [2024-07-13 01:01:00.057478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.509 qpair failed and we were unable to recover it.
00:35:48.769 [2024-07-13 01:01:00.067427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-07-13 01:01:00.067495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-07-13 01:01:00.067513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-07-13 01:01:00.067520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-07-13 01:01:00.067527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.769 [2024-07-13 01:01:00.067542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-07-13 01:01:00.077416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-07-13 01:01:00.077476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-07-13 01:01:00.077492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-07-13 01:01:00.077498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-07-13 01:01:00.077504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.769 [2024-07-13 01:01:00.077518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-07-13 01:01:00.087400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-07-13 01:01:00.087463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-07-13 01:01:00.087478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-07-13 01:01:00.087485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-07-13 01:01:00.087491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.769 [2024-07-13 01:01:00.087505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-07-13 01:01:00.097500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-07-13 01:01:00.097557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-07-13 01:01:00.097572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-07-13 01:01:00.097578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-07-13 01:01:00.097584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.769 [2024-07-13 01:01:00.097597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.769 qpair failed and we were unable to recover it.
00:35:48.769 [2024-07-13 01:01:00.107443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.769 [2024-07-13 01:01:00.107496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.769 [2024-07-13 01:01:00.107515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.769 [2024-07-13 01:01:00.107521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.769 [2024-07-13 01:01:00.107527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.107540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.117533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.117595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.117610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.117616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.117622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.117635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.127577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.127627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.127642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.127648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.127654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.127667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.137520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.137570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.137584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.137591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.137597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.137610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.147546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.147610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.147624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.147631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.147637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.147653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.157585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.157642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.157657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.157664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.157669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.157683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.167707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.167759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.167773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.167780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.167786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.167799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.177666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.177719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.177734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.177740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.177746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.177759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.187657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.187712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.187726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.187733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.187739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.187752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.197769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.197823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.197841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.197848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.197853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.197866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.207783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.207837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.207852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.207859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.207865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.207878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.217848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.217905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.217920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.217926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.217932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.217945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.227835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.227889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.227904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.227911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.227916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.227930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.237821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.237877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.237892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.237899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.237905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.237922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.770 [2024-07-13 01:01:00.247912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.770 [2024-07-13 01:01:00.247980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.770 [2024-07-13 01:01:00.247995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.770 [2024-07-13 01:01:00.248001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.770 [2024-07-13 01:01:00.248007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.770 [2024-07-13 01:01:00.248022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.770 qpair failed and we were unable to recover it.
00:35:48.771 [2024-07-13 01:01:00.257976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.771 [2024-07-13 01:01:00.258033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.771 [2024-07-13 01:01:00.258048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.771 [2024-07-13 01:01:00.258055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.771 [2024-07-13 01:01:00.258061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.771 [2024-07-13 01:01:00.258074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.771 qpair failed and we were unable to recover it.
00:35:48.771 [2024-07-13 01:01:00.267897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.771 [2024-07-13 01:01:00.267947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.771 [2024-07-13 01:01:00.267962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.771 [2024-07-13 01:01:00.267969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.771 [2024-07-13 01:01:00.267975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.771 [2024-07-13 01:01:00.267988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.771 qpair failed and we were unable to recover it.
00:35:48.771 [2024-07-13 01:01:00.278042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.771 [2024-07-13 01:01:00.278100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.771 [2024-07-13 01:01:00.278114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.771 [2024-07-13 01:01:00.278121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.771 [2024-07-13 01:01:00.278127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.771 [2024-07-13 01:01:00.278140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.771 qpair failed and we were unable to recover it.
00:35:48.771 [2024-07-13 01:01:00.287956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.771 [2024-07-13 01:01:00.288012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.771 [2024-07-13 01:01:00.288030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.771 [2024-07-13 01:01:00.288037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.771 [2024-07-13 01:01:00.288042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.771 [2024-07-13 01:01:00.288056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.771 qpair failed and we were unable to recover it.
00:35:48.771 [2024-07-13 01:01:00.297993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.771 [2024-07-13 01:01:00.298046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.771 [2024-07-13 01:01:00.298061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.771 [2024-07-13 01:01:00.298067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.771 [2024-07-13 01:01:00.298073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.771 [2024-07-13 01:01:00.298086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.771 qpair failed and we were unable to recover it.
00:35:48.771 [2024-07-13 01:01:00.308135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:48.771 [2024-07-13 01:01:00.308193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:48.771 [2024-07-13 01:01:00.308209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:48.771 [2024-07-13 01:01:00.308216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:48.771 [2024-07-13 01:01:00.308221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:48.771 [2024-07-13 01:01:00.308239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:48.771 qpair failed and we were unable to recover it.
00:35:48.771 [2024-07-13 01:01:00.318100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:48.771 [2024-07-13 01:01:00.318159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:48.771 [2024-07-13 01:01:00.318174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:48.771 [2024-07-13 01:01:00.318181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:48.771 [2024-07-13 01:01:00.318187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:48.771 [2024-07-13 01:01:00.318200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:48.771 qpair failed and we were unable to recover it. 00:35:49.031 [2024-07-13 01:01:00.328170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.328231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.328247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.328253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.328259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.328276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 00:35:49.031 [2024-07-13 01:01:00.338182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.338241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.338256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.338263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.338269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.338282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 
00:35:49.031 [2024-07-13 01:01:00.348151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.348206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.348219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.348232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.348238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.348251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 00:35:49.031 [2024-07-13 01:01:00.358234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.358290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.358304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.358311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.358317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.358330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 00:35:49.031 [2024-07-13 01:01:00.368194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.368270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.368285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.368291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.368297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.368311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 
00:35:49.031 [2024-07-13 01:01:00.378302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.378355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.378375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.378382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.378387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.378401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 00:35:49.031 [2024-07-13 01:01:00.388243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.388296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.388310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.388317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.388323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.388335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 00:35:49.031 [2024-07-13 01:01:00.398319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.398378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.398393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.398399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.398405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.031 [2024-07-13 01:01:00.398418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.031 qpair failed and we were unable to recover it. 
00:35:49.031 [2024-07-13 01:01:00.408364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.031 [2024-07-13 01:01:00.408423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.031 [2024-07-13 01:01:00.408438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.031 [2024-07-13 01:01:00.408444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.031 [2024-07-13 01:01:00.408449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.408464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.418562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.418626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.418640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.418646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.418655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.418668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.428491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.428549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.428562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.428569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.428574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.428588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 
00:35:49.032 [2024-07-13 01:01:00.438508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.438584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.438599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.438605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.438610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.438623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.448536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.448588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.448603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.448609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.448614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.448627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.458528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.458586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.458600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.458606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.458612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.458625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 
00:35:49.032 [2024-07-13 01:01:00.468538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.468596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.468610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.468617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.468622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.468635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.478521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.478605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.478619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.478626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.478631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.478644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.488590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.488646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.488660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.488667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.488673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.488686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 
00:35:49.032 [2024-07-13 01:01:00.498662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.498717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.498732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.498738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.498744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.498757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.508711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.508796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.508810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.508817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.508825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.508838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.518625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.518680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.518695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.518702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.518708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.518721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 
00:35:49.032 [2024-07-13 01:01:00.528701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.528761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.528776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.528782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.528789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.528802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.538780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.538835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.538850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.538856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.538862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.538876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 00:35:49.032 [2024-07-13 01:01:00.548779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.032 [2024-07-13 01:01:00.548831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.032 [2024-07-13 01:01:00.548845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.032 [2024-07-13 01:01:00.548852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.032 [2024-07-13 01:01:00.548858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.032 [2024-07-13 01:01:00.548871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.032 qpair failed and we were unable to recover it. 
00:35:49.032 [2024-07-13 01:01:00.558807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-07-13 01:01:00.558884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-07-13 01:01:00.558898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-07-13 01:01:00.558904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-07-13 01:01:00.558910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.033 [2024-07-13 01:01:00.558924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.033 qpair failed and we were unable to recover it. 00:35:49.033 [2024-07-13 01:01:00.568836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-07-13 01:01:00.568895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-07-13 01:01:00.568910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-07-13 01:01:00.568916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-07-13 01:01:00.568922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.033 [2024-07-13 01:01:00.568936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.033 qpair failed and we were unable to recover it. 00:35:49.033 [2024-07-13 01:01:00.578846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.033 [2024-07-13 01:01:00.578897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.033 [2024-07-13 01:01:00.578911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.033 [2024-07-13 01:01:00.578917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.033 [2024-07-13 01:01:00.578923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.033 [2024-07-13 01:01:00.578936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.033 qpair failed and we were unable to recover it. 
00:35:49.293 [2024-07-13 01:01:00.588893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.588971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.588986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.588992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.588999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.589013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.598919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.598985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.599000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.599009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.599015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.599028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.608946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.609002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.609017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.609023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.609029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.609042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 
00:35:49.293 [2024-07-13 01:01:00.618953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.619009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.619023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.619030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.619035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.619049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.628985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.629040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.629054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.629061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.629066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.629080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.639022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.639079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.639094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.639101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.639106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.639120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 
00:35:49.293 [2024-07-13 01:01:00.649037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.649111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.649125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.649132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.649138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.649151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.659127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.659213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.659231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.659238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.659243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.659256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.669092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.669149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.669164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.669170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.669177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.669190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 
00:35:49.293 [2024-07-13 01:01:00.679135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.679186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.679201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.679207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.679214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.679231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.689162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.689247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.689264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.689274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.689280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.689295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.699186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.699245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.699260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.699267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.699274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.699288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 
00:35:49.293 [2024-07-13 01:01:00.709233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.709290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.709305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.709312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.709318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.293 [2024-07-13 01:01:00.709332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.293 qpair failed and we were unable to recover it. 00:35:49.293 [2024-07-13 01:01:00.719265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.293 [2024-07-13 01:01:00.719329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.293 [2024-07-13 01:01:00.719343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.293 [2024-07-13 01:01:00.719349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.293 [2024-07-13 01:01:00.719355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.719369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.729283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.729338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.729353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.729360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.729365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.729379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 
00:35:49.294 [2024-07-13 01:01:00.739301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.739359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.739374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.739381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.739386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.739400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.749266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.749360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.749375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.749381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.749387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.749401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.759361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.759414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.759428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.759435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.759440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.759454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 
00:35:49.294 [2024-07-13 01:01:00.769350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.769406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.769421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.769427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.769433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.769447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.779422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.779476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.779490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.779500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.779505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.779519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.789455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.789508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.789523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.789529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.789535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.789548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 
00:35:49.294 [2024-07-13 01:01:00.799499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.799552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.799567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.799573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.799579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.799592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.809510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.809564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.809578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.809585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.809590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.809603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.819540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.819591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.819605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.819612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.819618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.819631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 
00:35:49.294 [2024-07-13 01:01:00.829579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.829631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.829645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.829652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.829657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.829671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.839602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.839681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.839696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.839702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.839708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.839721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 00:35:49.294 [2024-07-13 01:01:00.849651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.294 [2024-07-13 01:01:00.849715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.294 [2024-07-13 01:01:00.849729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.294 [2024-07-13 01:01:00.849735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.294 [2024-07-13 01:01:00.849741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.294 [2024-07-13 01:01:00.849754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.294 qpair failed and we were unable to recover it. 
00:35:49.555 [2024-07-13 01:01:00.859697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.555 [2024-07-13 01:01:00.859750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.555 [2024-07-13 01:01:00.859764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.555 [2024-07-13 01:01:00.859771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.555 [2024-07-13 01:01:00.859777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.555 [2024-07-13 01:01:00.859791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-07-13 01:01:00.869744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.555 [2024-07-13 01:01:00.869856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.555 [2024-07-13 01:01:00.869874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.555 [2024-07-13 01:01:00.869881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.555 [2024-07-13 01:01:00.869886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.555 [2024-07-13 01:01:00.869901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.555 qpair failed and we were unable to recover it. 00:35:49.555 [2024-07-13 01:01:00.879758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:49.555 [2024-07-13 01:01:00.879817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:49.555 [2024-07-13 01:01:00.879831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:49.555 [2024-07-13 01:01:00.879838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:49.555 [2024-07-13 01:01:00.879843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:49.555 [2024-07-13 01:01:00.879858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:49.555 qpair failed and we were unable to recover it. 
00:35:49.555 [2024-07-13 01:01:00.889746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.889801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.889816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.889822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.889828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.889842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.899775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.899826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.899840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.899847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.899853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.899867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.909804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.909862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.909876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.909883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.909889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.909905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.919844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.919938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.919952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.919959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.919964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.919977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.929851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.929906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.929920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.929926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.929932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.929945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.939885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.939938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.939952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.939959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.939964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.939978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.949916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.949970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.949984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.949991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.949996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.950009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.959891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.959945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.959963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.959970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.959975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.959989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.969952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.970010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.970025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.970031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.970037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.970051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.979992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.980046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.980060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.980067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.980073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.980086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:00.990058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.555 [2024-07-13 01:01:00.990108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.555 [2024-07-13 01:01:00.990123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.555 [2024-07-13 01:01:00.990129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.555 [2024-07-13 01:01:00.990135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.555 [2024-07-13 01:01:00.990149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.555 qpair failed and we were unable to recover it.
00:35:49.555 [2024-07-13 01:01:01.000129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.000206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.000221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.000232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.000238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.000255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.010082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.010139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.010154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.010160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.010166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.010180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.020111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.020169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.020183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.020190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.020196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.020209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.030145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.030198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.030212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.030219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.030229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.030243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.040183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.040243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.040258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.040265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.040271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.040284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.050204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.050264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.050281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.050287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.050292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.050306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.060242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.060295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.060309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.060316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.060322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.060335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.070282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.070359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.070373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.070379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.070384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.070398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.080302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.080365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.080379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.080386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.080392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.080405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.090292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.090347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.090362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.090368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.090374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.090391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.100349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.100403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.100418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.100424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.100430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.100443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.556 [2024-07-13 01:01:01.110414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.556 [2024-07-13 01:01:01.110466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.556 [2024-07-13 01:01:01.110481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.556 [2024-07-13 01:01:01.110487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.556 [2024-07-13 01:01:01.110493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.556 [2024-07-13 01:01:01.110507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.556 qpair failed and we were unable to recover it.
00:35:49.816 [2024-07-13 01:01:01.120425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.816 [2024-07-13 01:01:01.120484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.816 [2024-07-13 01:01:01.120498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.816 [2024-07-13 01:01:01.120505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.816 [2024-07-13 01:01:01.120511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.816 [2024-07-13 01:01:01.120525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.816 qpair failed and we were unable to recover it.
00:35:49.816 [2024-07-13 01:01:01.130446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.816 [2024-07-13 01:01:01.130499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.816 [2024-07-13 01:01:01.130514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.816 [2024-07-13 01:01:01.130521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.816 [2024-07-13 01:01:01.130526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.816 [2024-07-13 01:01:01.130539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.816 qpair failed and we were unable to recover it.
00:35:49.816 [2024-07-13 01:01:01.140469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.816 [2024-07-13 01:01:01.140524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.816 [2024-07-13 01:01:01.140545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.816 [2024-07-13 01:01:01.140551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.816 [2024-07-13 01:01:01.140557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.816 [2024-07-13 01:01:01.140570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.816 qpair failed and we were unable to recover it.
00:35:49.816 [2024-07-13 01:01:01.150490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.816 [2024-07-13 01:01:01.150540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.816 [2024-07-13 01:01:01.150555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.816 [2024-07-13 01:01:01.150561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.816 [2024-07-13 01:01:01.150567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.816 [2024-07-13 01:01:01.150580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.816 qpair failed and we were unable to recover it.
00:35:49.816 [2024-07-13 01:01:01.160526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.816 [2024-07-13 01:01:01.160586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.816 [2024-07-13 01:01:01.160600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.816 [2024-07-13 01:01:01.160606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.816 [2024-07-13 01:01:01.160612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.160626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.170547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.170600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.170615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.170622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.170628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.170641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.180589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.180642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.180657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.180663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.180672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.180686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.190613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.190666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.190680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.190687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.190693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.190707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.200642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.200698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.200713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.200719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.200725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.200738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.210652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.210705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.210719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.210726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.210732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.210745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.220620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.220677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.220691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.220698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.220704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.220718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.230714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.230771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.230785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.230791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.230797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.230810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.240783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.240835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.240849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.240855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.240861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.240873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.250705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.250759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.250775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.250781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.250787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.250801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.260811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.260863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.260878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.260885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.260890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.260904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.270836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.270890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.270904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.270910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.270920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.270933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.280903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.280959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.280973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.280980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.280986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.281000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.290876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.290934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.290948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.290955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.290961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.290974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.300922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.817 [2024-07-13 01:01:01.300977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.817 [2024-07-13 01:01:01.300992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.817 [2024-07-13 01:01:01.300998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.817 [2024-07-13 01:01:01.301004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.817 [2024-07-13 01:01:01.301018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.817 qpair failed and we were unable to recover it.
00:35:49.817 [2024-07-13 01:01:01.310953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.818 [2024-07-13 01:01:01.311006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.818 [2024-07-13 01:01:01.311020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.818 [2024-07-13 01:01:01.311027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.818 [2024-07-13 01:01:01.311033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.818 [2024-07-13 01:01:01.311046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.818 qpair failed and we were unable to recover it.
00:35:49.818 [2024-07-13 01:01:01.320992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.818 [2024-07-13 01:01:01.321052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.818 [2024-07-13 01:01:01.321066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.818 [2024-07-13 01:01:01.321073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.818 [2024-07-13 01:01:01.321078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.818 [2024-07-13 01:01:01.321091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.818 qpair failed and we were unable to recover it.
00:35:49.818 [2024-07-13 01:01:01.331001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.818 [2024-07-13 01:01:01.331059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.818 [2024-07-13 01:01:01.331074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.818 [2024-07-13 01:01:01.331080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.818 [2024-07-13 01:01:01.331086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.818 [2024-07-13 01:01:01.331099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.818 qpair failed and we were unable to recover it.
00:35:49.818 [2024-07-13 01:01:01.341071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.818 [2024-07-13 01:01:01.341122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.818 [2024-07-13 01:01:01.341136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.818 [2024-07-13 01:01:01.341143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.818 [2024-07-13 01:01:01.341149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.818 [2024-07-13 01:01:01.341162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.818 qpair failed and we were unable to recover it.
00:35:49.818 [2024-07-13 01:01:01.351105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.818 [2024-07-13 01:01:01.351160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.818 [2024-07-13 01:01:01.351175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.818 [2024-07-13 01:01:01.351181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.818 [2024-07-13 01:01:01.351187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.818 [2024-07-13 01:01:01.351200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.818 qpair failed and we were unable to recover it.
00:35:49.818 [2024-07-13 01:01:01.361098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.818 [2024-07-13 01:01:01.361154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.818 [2024-07-13 01:01:01.361169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.818 [2024-07-13 01:01:01.361178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.818 [2024-07-13 01:01:01.361184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.818 [2024-07-13 01:01:01.361197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.818 qpair failed and we were unable to recover it.
00:35:49.818 [2024-07-13 01:01:01.371122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:49.818 [2024-07-13 01:01:01.371177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:49.818 [2024-07-13 01:01:01.371192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:49.818 [2024-07-13 01:01:01.371199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:49.818 [2024-07-13 01:01:01.371204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:49.818 [2024-07-13 01:01:01.371218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:49.818 qpair failed and we were unable to recover it.
00:35:50.078 [2024-07-13 01:01:01.381201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.078 [2024-07-13 01:01:01.381261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.078 [2024-07-13 01:01:01.381276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.078 [2024-07-13 01:01:01.381283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.078 [2024-07-13 01:01:01.381289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.078 [2024-07-13 01:01:01.381303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.078 qpair failed and we were unable to recover it.
00:35:50.078 [2024-07-13 01:01:01.391198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.078 [2024-07-13 01:01:01.391259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.078 [2024-07-13 01:01:01.391274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.078 [2024-07-13 01:01:01.391280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.078 [2024-07-13 01:01:01.391286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.078 [2024-07-13 01:01:01.391299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.078 qpair failed and we were unable to recover it.
00:35:50.078 [2024-07-13 01:01:01.401211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.078 [2024-07-13 01:01:01.401274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.078 [2024-07-13 01:01:01.401288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.078 [2024-07-13 01:01:01.401294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.078 [2024-07-13 01:01:01.401300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.078 [2024-07-13 01:01:01.401313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.078 qpair failed and we were unable to recover it.
00:35:50.078 [2024-07-13 01:01:01.411243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.078 [2024-07-13 01:01:01.411297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.078 [2024-07-13 01:01:01.411311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.078 [2024-07-13 01:01:01.411317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.078 [2024-07-13 01:01:01.411323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.078 [2024-07-13 01:01:01.411337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.078 qpair failed and we were unable to recover it.
00:35:50.078 [2024-07-13 01:01:01.421294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.078 [2024-07-13 01:01:01.421350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.078 [2024-07-13 01:01:01.421364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.421371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.421376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.421391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.431302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.431355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.431369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.431376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.431382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.431395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.441336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.441389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.441405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.441411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.441417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.441430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.451349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.451405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.451420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.451429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.451435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.451449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.461380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.461434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.461448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.461454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.461460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.461474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.471376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.471430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.471445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.471452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.471457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.471471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.481432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.481501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.481517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.481523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.481529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.481542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.491448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.491507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.491522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.491528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.491534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.491547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.501494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.501552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.501568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.501575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.501581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.501595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.511500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.511560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.511575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.511581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.511587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.511601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.521555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.521611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.521626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.521632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.521638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.521651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.531582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.531639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.531654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.531660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.531666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.531679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.541549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.541600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.541614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.541624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.541630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.541643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.551641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.551696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.551711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.551717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.551723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.079 [2024-07-13 01:01:01.551736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.079 qpair failed and we were unable to recover it.
00:35:50.079 [2024-07-13 01:01:01.561605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.079 [2024-07-13 01:01:01.561661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.079 [2024-07-13 01:01:01.561676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.079 [2024-07-13 01:01:01.561682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.079 [2024-07-13 01:01:01.561688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.561701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.080 [2024-07-13 01:01:01.571629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.080 [2024-07-13 01:01:01.571681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.080 [2024-07-13 01:01:01.571696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.080 [2024-07-13 01:01:01.571702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.080 [2024-07-13 01:01:01.571708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.571721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.080 [2024-07-13 01:01:01.581662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.080 [2024-07-13 01:01:01.581746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.080 [2024-07-13 01:01:01.581760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.080 [2024-07-13 01:01:01.581766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.080 [2024-07-13 01:01:01.581772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.581785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.080 [2024-07-13 01:01:01.591746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.080 [2024-07-13 01:01:01.591800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.080 [2024-07-13 01:01:01.591815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.080 [2024-07-13 01:01:01.591821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.080 [2024-07-13 01:01:01.591827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.591840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.080 [2024-07-13 01:01:01.601787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.080 [2024-07-13 01:01:01.601849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.080 [2024-07-13 01:01:01.601864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.080 [2024-07-13 01:01:01.601870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.080 [2024-07-13 01:01:01.601876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.601889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.080 [2024-07-13 01:01:01.611808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.080 [2024-07-13 01:01:01.611866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.080 [2024-07-13 01:01:01.611880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.080 [2024-07-13 01:01:01.611887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.080 [2024-07-13 01:01:01.611893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.611906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.080 [2024-07-13 01:01:01.621782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.080 [2024-07-13 01:01:01.621835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.080 [2024-07-13 01:01:01.621850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.080 [2024-07-13 01:01:01.621856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.080 [2024-07-13 01:01:01.621862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.621875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.080 [2024-07-13 01:01:01.631821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.080 [2024-07-13 01:01:01.631878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.080 [2024-07-13 01:01:01.631896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.080 [2024-07-13 01:01:01.631903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.080 [2024-07-13 01:01:01.631908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.080 [2024-07-13 01:01:01.631922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.080 qpair failed and we were unable to recover it.
00:35:50.340 [2024-07-13 01:01:01.641864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.641923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.641937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.641944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.641949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.641963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.651928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.651985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.651999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.652006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.652012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.652025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.661947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.661999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.662014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.662021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.662027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.662040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.672024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.672077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.672092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.672099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.672105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.672118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.682051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.682184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.682251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.682259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.682265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.682281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.692034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.692092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.692108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.692115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.692121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.692135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.702013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.702070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.702085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.702091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.702097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.702110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.712103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.712159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.712174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.712180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.712186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.712200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.722171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.722230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.722248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.722255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.722260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.722274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.732148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.732202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.732216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.732223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.732232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.732246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.742192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.742243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.742258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.742265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.742271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.742284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.752174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.752232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.752247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.752254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.752259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.752273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.762296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.762351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.762366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.341 [2024-07-13 01:01:01.762372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.341 [2024-07-13 01:01:01.762378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.341 [2024-07-13 01:01:01.762395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.341 qpair failed and we were unable to recover it.
00:35:50.341 [2024-07-13 01:01:01.772206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.341 [2024-07-13 01:01:01.772304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.341 [2024-07-13 01:01:01.772321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.772328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.772334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.772348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.782319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.782375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.782390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.782398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.782404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.782418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.792333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.792422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.792438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.792444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.792450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.792464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.802308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.802368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.802384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.802390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.802396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.802410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.812422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.812477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.812496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.812503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.812508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.812522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.822402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.822461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.822475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.822482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.822488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.822502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.832437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.832493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.832508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.832515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.832520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.832534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.842494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.842552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.842566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.842573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.842579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.842592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.852506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.852555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.852570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.852576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.852583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.852599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.862569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.862625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.862640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.862646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.862652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.862665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.872539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.872595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.872610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.872616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.872622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.872635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.882626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.882700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.882714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.882720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.882726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.882739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.342 [2024-07-13 01:01:01.892580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.342 [2024-07-13 01:01:01.892637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.342 [2024-07-13 01:01:01.892651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.342 [2024-07-13 01:01:01.892658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.342 [2024-07-13 01:01:01.892663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.342 [2024-07-13 01:01:01.892677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.342 qpair failed and we were unable to recover it.
00:35:50.602 [2024-07-13 01:01:01.902669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.602 [2024-07-13 01:01:01.902724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.602 [2024-07-13 01:01:01.902745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.602 [2024-07-13 01:01:01.902751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.602 [2024-07-13 01:01:01.902757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.602 [2024-07-13 01:01:01.902770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.602 qpair failed and we were unable to recover it.
00:35:50.602 [2024-07-13 01:01:01.912752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.912806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.912821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.912827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.912833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.912847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.922787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.922839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.922854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.922860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.922866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.922879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.932752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.932807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.932822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.932828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.932834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.932848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.942801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.942857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.942871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.942878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.942887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.942900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.952818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.952871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.952886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.952892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.952898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.952911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.962914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.963018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.963032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.963038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.963044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.963057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.972872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.972927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.972942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.972948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.972955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.972968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.982917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.982974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.982989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.982996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.983002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.983015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:01.992879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:01.992931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:01.992946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:01.992952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:01.992958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:01.992971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:02.002971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:02.003030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:02.003045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:02.003052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:02.003058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:02.003071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:02.013004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:02.013061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:02.013076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:02.013082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:02.013088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:02.013102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:02.023052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:50.603 [2024-07-13 01:01:02.023112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:50.603 [2024-07-13 01:01:02.023128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:50.603 [2024-07-13 01:01:02.023134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:50.603 [2024-07-13 01:01:02.023141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:50.603 [2024-07-13 01:01:02.023155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:50.603 qpair failed and we were unable to recover it.
00:35:50.603 [2024-07-13 01:01:02.033039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.603 [2024-07-13 01:01:02.033094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.603 [2024-07-13 01:01:02.033108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.603 [2024-07-13 01:01:02.033115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.603 [2024-07-13 01:01:02.033124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.603 [2024-07-13 01:01:02.033137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-07-13 01:01:02.043096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.603 [2024-07-13 01:01:02.043150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.603 [2024-07-13 01:01:02.043165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.603 [2024-07-13 01:01:02.043171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.603 [2024-07-13 01:01:02.043177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.603 [2024-07-13 01:01:02.043190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.603 qpair failed and we were unable to recover it. 00:35:50.603 [2024-07-13 01:01:02.053108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.603 [2024-07-13 01:01:02.053162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.603 [2024-07-13 01:01:02.053177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.603 [2024-07-13 01:01:02.053183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.603 [2024-07-13 01:01:02.053189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.603 [2024-07-13 01:01:02.053203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 
00:35:50.604 [2024-07-13 01:01:02.063159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.063214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.063233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.063239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.063246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.063259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-07-13 01:01:02.073160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.073211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.073230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.073237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.073243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.073257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-07-13 01:01:02.083217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.083276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.083290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.083296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.083302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.083315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 
00:35:50.604 [2024-07-13 01:01:02.093240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.093299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.093313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.093320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.093325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.093338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-07-13 01:01:02.103281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.103329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.103344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.103350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.103356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.103369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-07-13 01:01:02.113303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.113360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.113375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.113382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.113387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.113401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 
00:35:50.604 [2024-07-13 01:01:02.123348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.123400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.123415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.123421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.123431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.123444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-07-13 01:01:02.133362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.133415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.133429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.133435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.133441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.133454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.604 [2024-07-13 01:01:02.143393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.143445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.143459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.143466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.143472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.143486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 
00:35:50.604 [2024-07-13 01:01:02.153396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.604 [2024-07-13 01:01:02.153453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.604 [2024-07-13 01:01:02.153467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.604 [2024-07-13 01:01:02.153474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.604 [2024-07-13 01:01:02.153479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.604 [2024-07-13 01:01:02.153493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.604 qpair failed and we were unable to recover it. 00:35:50.864 [2024-07-13 01:01:02.163372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.864 [2024-07-13 01:01:02.163427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.864 [2024-07-13 01:01:02.163441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.864 [2024-07-13 01:01:02.163448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.864 [2024-07-13 01:01:02.163454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.864 [2024-07-13 01:01:02.163467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.864 qpair failed and we were unable to recover it. 00:35:50.864 [2024-07-13 01:01:02.173505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.864 [2024-07-13 01:01:02.173565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.864 [2024-07-13 01:01:02.173579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.864 [2024-07-13 01:01:02.173586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.864 [2024-07-13 01:01:02.173591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.864 [2024-07-13 01:01:02.173605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.864 qpair failed and we were unable to recover it. 
00:35:50.864 [2024-07-13 01:01:02.183499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.864 [2024-07-13 01:01:02.183552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.864 [2024-07-13 01:01:02.183566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.864 [2024-07-13 01:01:02.183573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.864 [2024-07-13 01:01:02.183579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.864 [2024-07-13 01:01:02.183592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.864 qpair failed and we were unable to recover it. 00:35:50.864 [2024-07-13 01:01:02.193528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.193578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.193593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.193599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.193605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.193619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.203563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.203619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.203633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.203639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.203645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.203658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 
00:35:50.865 [2024-07-13 01:01:02.213623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.213677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.213691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.213701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.213707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.213720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.223625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.223678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.223692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.223699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.223704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.223718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.233649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.233699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.233713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.233720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.233725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.233738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 
00:35:50.865 [2024-07-13 01:01:02.243670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.243725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.243739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.243746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.243752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.243765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.253735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.253791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.253806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.253813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.253818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.253832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.263750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.263815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.263829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.263835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.263841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.263854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 
00:35:50.865 [2024-07-13 01:01:02.273763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.273814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.273829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.273835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.273841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.273854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.283731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.283786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.283800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.283807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.283813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.283826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.293811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.293866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.293880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.293887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.293892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.293905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 
00:35:50.865 [2024-07-13 01:01:02.303841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.303898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.303914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.303924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.303929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.303943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.313909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.313964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.313979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.313985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.313991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.314004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 00:35:50.865 [2024-07-13 01:01:02.323901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.323957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.865 [2024-07-13 01:01:02.323971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.865 [2024-07-13 01:01:02.323978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.865 [2024-07-13 01:01:02.323983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.865 [2024-07-13 01:01:02.323997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.865 qpair failed and we were unable to recover it. 
00:35:50.865 [2024-07-13 01:01:02.333961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.865 [2024-07-13 01:01:02.334014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.334029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.334035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.334041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.334054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 00:35:50.866 [2024-07-13 01:01:02.343955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.344009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.344024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.344031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.344036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.344050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 00:35:50.866 [2024-07-13 01:01:02.353991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.354056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.354071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.354078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.354083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.354096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 
00:35:50.866 [2024-07-13 01:01:02.364022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.364075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.364090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.364097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.364102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.364116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 00:35:50.866 [2024-07-13 01:01:02.374051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.374110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.374125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.374131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.374137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.374151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 00:35:50.866 [2024-07-13 01:01:02.384072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.384127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.384142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.384148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.384154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.384168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 
00:35:50.866 [2024-07-13 01:01:02.394124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.394201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.394215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.394229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.394235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.394248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 00:35:50.866 [2024-07-13 01:01:02.404139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.404192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.404207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.404213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.404218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.404242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 00:35:50.866 [2024-07-13 01:01:02.414153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:50.866 [2024-07-13 01:01:02.414207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:50.866 [2024-07-13 01:01:02.414221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:50.866 [2024-07-13 01:01:02.414230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:50.866 [2024-07-13 01:01:02.414236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:50.866 [2024-07-13 01:01:02.414249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:50.866 qpair failed and we were unable to recover it. 
00:35:51.126 [2024-07-13 01:01:02.424198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.126 [2024-07-13 01:01:02.424257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.126 [2024-07-13 01:01:02.424272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.126 [2024-07-13 01:01:02.424279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.126 [2024-07-13 01:01:02.424285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.126 [2024-07-13 01:01:02.424299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.126 qpair failed and we were unable to recover it. 00:35:51.126 [2024-07-13 01:01:02.434167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.126 [2024-07-13 01:01:02.434217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.126 [2024-07-13 01:01:02.434235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.126 [2024-07-13 01:01:02.434241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.126 [2024-07-13 01:01:02.434247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.126 [2024-07-13 01:01:02.434261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.126 qpair failed and we were unable to recover it. 00:35:51.126 [2024-07-13 01:01:02.444261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.126 [2024-07-13 01:01:02.444314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.126 [2024-07-13 01:01:02.444329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.444335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.444341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.444355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 
00:35:51.127 [2024-07-13 01:01:02.454276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.454332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.454347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.454353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.454359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.454373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.464314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.464365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.464380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.464386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.464392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.464405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.474337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.474386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.474400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.474406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.474412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.474426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 
00:35:51.127 [2024-07-13 01:01:02.484375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.484426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.484443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.484450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.484455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.484469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.494399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.494498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.494513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.494519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.494525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.494537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.504350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.504407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.504422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.504429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.504434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.504448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 
00:35:51.127 [2024-07-13 01:01:02.514454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.514508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.514522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.514529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.514534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.514547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.524494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.524548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.524562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.524569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.524574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.524591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.534508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.534568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.534582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.534588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.534594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.534608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 
00:35:51.127 [2024-07-13 01:01:02.544535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.544581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.544595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.544601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.544607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.544621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.554576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.554626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.554640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.554647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.554652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.554665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.564541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.564594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.564608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.564614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.564620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.564633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 
00:35:51.127 [2024-07-13 01:01:02.574620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.574674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.574691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.574697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.574703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.574716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.127 [2024-07-13 01:01:02.584666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.127 [2024-07-13 01:01:02.584714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.127 [2024-07-13 01:01:02.584729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.127 [2024-07-13 01:01:02.584736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.127 [2024-07-13 01:01:02.584741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.127 [2024-07-13 01:01:02.584754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.127 qpair failed and we were unable to recover it. 00:35:51.128 [2024-07-13 01:01:02.594763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.594868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.594882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.594889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.594894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.594907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 
00:35:51.128 [2024-07-13 01:01:02.604745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.604804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.604818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.604824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.604830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.604843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 00:35:51.128 [2024-07-13 01:01:02.614755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.614851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.614865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.614871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.614877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.614894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 00:35:51.128 [2024-07-13 01:01:02.624787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.624840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.624854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.624860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.624866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.624879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 
00:35:51.128 [2024-07-13 01:01:02.634857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.634906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.634920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.634926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.634932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.634945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 00:35:51.128 [2024-07-13 01:01:02.644885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.644941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.644955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.644961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.644967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.644980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 00:35:51.128 [2024-07-13 01:01:02.654890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.654939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.654953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.654960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.654965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.654978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 
00:35:51.128 [2024-07-13 01:01:02.664910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.664982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.665003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.665009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.665015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.665028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 00:35:51.128 [2024-07-13 01:01:02.674953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.128 [2024-07-13 01:01:02.675010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.128 [2024-07-13 01:01:02.675025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.128 [2024-07-13 01:01:02.675031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.128 [2024-07-13 01:01:02.675037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.128 [2024-07-13 01:01:02.675050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.128 qpair failed and we were unable to recover it. 00:35:51.388 [2024-07-13 01:01:02.684978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.388 [2024-07-13 01:01:02.685032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.388 [2024-07-13 01:01:02.685050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.388 [2024-07-13 01:01:02.685056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.388 [2024-07-13 01:01:02.685063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.388 [2024-07-13 01:01:02.685077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.388 qpair failed and we were unable to recover it. 
00:35:51.388 [2024-07-13 01:01:02.695006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.388 [2024-07-13 01:01:02.695056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.388 [2024-07-13 01:01:02.695072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.388 [2024-07-13 01:01:02.695078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.388 [2024-07-13 01:01:02.695084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.388 [2024-07-13 01:01:02.695098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.388 qpair failed and we were unable to recover it. 00:35:51.388 [2024-07-13 01:01:02.705099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.388 [2024-07-13 01:01:02.705203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.388 [2024-07-13 01:01:02.705218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.388 [2024-07-13 01:01:02.705229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.388 [2024-07-13 01:01:02.705235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.388 [2024-07-13 01:01:02.705251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.388 qpair failed and we were unable to recover it. 00:35:51.388 [2024-07-13 01:01:02.715067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.388 [2024-07-13 01:01:02.715120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.388 [2024-07-13 01:01:02.715135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.388 [2024-07-13 01:01:02.715142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.388 [2024-07-13 01:01:02.715148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.388 [2024-07-13 01:01:02.715161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.388 qpair failed and we were unable to recover it. 
00:35:51.388 [2024-07-13 01:01:02.725095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.388 [2024-07-13 01:01:02.725150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.388 [2024-07-13 01:01:02.725165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.388 [2024-07-13 01:01:02.725172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.388 [2024-07-13 01:01:02.725177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.388 [2024-07-13 01:01:02.725191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.735122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.735176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.735190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.735196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.735202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.735215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.745157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.745209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.745223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.745234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.745240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.745253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 
00:35:51.389 [2024-07-13 01:01:02.755187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.755265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.755283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.755290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.755295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.755309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.765230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.765304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.765319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.765326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.765332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.765346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.775237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.775288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.775303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.775310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.775315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.775329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 
00:35:51.389 [2024-07-13 01:01:02.785185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.785243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.785258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.785264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.785271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.785284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.795345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.795397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.795411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.795417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.795426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.795440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.805328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.805381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.805396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.805402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.805408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.805422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 
00:35:51.389 [2024-07-13 01:01:02.815345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.815402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.815416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.815423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.815428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.815442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.825413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.825465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.825481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.825489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.825495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.825508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.835401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.835454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.835469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.835476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.835481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.835495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 
00:35:51.389 [2024-07-13 01:01:02.845365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.845425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.845439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.845447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.845452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.845466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.855379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.855434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.855448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.855455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.855461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.855474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 00:35:51.389 [2024-07-13 01:01:02.865486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.389 [2024-07-13 01:01:02.865538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.389 [2024-07-13 01:01:02.865552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.389 [2024-07-13 01:01:02.865558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.389 [2024-07-13 01:01:02.865564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.389 [2024-07-13 01:01:02.865578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.389 qpair failed and we were unable to recover it. 
00:35:51.389 [2024-07-13 01:01:02.875507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.875564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.875578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.875584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.875590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.875603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 00:35:51.390 [2024-07-13 01:01:02.885547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.885603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.885617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.885624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.885632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.885646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 00:35:51.390 [2024-07-13 01:01:02.895494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.895546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.895561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.895567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.895573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.895586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 
00:35:51.390 [2024-07-13 01:01:02.905541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.905646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.905661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.905667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.905673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.905686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 00:35:51.390 [2024-07-13 01:01:02.915657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.915713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.915727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.915734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.915740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.915753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 00:35:51.390 [2024-07-13 01:01:02.925705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.925771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.925785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.925792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.925797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.925812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 
00:35:51.390 [2024-07-13 01:01:02.935628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.935689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.935703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.935710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.935716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.935729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 00:35:51.390 [2024-07-13 01:01:02.945647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.390 [2024-07-13 01:01:02.945707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.390 [2024-07-13 01:01:02.945722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.390 [2024-07-13 01:01:02.945729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.390 [2024-07-13 01:01:02.945735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.390 [2024-07-13 01:01:02.945748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.390 qpair failed and we were unable to recover it. 00:35:51.649 [2024-07-13 01:01:02.955744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:02.955798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:02.955813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:02.955820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.649 [2024-07-13 01:01:02.955826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.649 [2024-07-13 01:01:02.955840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.649 qpair failed and we were unable to recover it. 
00:35:51.649 [2024-07-13 01:01:02.965708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:02.965768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:02.965783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:02.965790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.649 [2024-07-13 01:01:02.965795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.649 [2024-07-13 01:01:02.965809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.649 qpair failed and we were unable to recover it. 00:35:51.649 [2024-07-13 01:01:02.975798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:02.975853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:02.975868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:02.975879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.649 [2024-07-13 01:01:02.975884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.649 [2024-07-13 01:01:02.975898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.649 qpair failed and we were unable to recover it. 00:35:51.649 [2024-07-13 01:01:02.985835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:02.985884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:02.985899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:02.985906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.649 [2024-07-13 01:01:02.985912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.649 [2024-07-13 01:01:02.985926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.649 qpair failed and we were unable to recover it. 
00:35:51.649 [2024-07-13 01:01:02.995804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:02.995863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:02.995878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:02.995885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.649 [2024-07-13 01:01:02.995891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.649 [2024-07-13 01:01:02.995904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.649 qpair failed and we were unable to recover it. 00:35:51.649 [2024-07-13 01:01:03.005884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:03.005942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:03.005958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:03.005965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.649 [2024-07-13 01:01:03.005971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.649 [2024-07-13 01:01:03.005984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.649 qpair failed and we were unable to recover it. 00:35:51.649 [2024-07-13 01:01:03.015843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:03.015910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:03.015924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:03.015931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.649 [2024-07-13 01:01:03.015937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.649 [2024-07-13 01:01:03.015950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.649 qpair failed and we were unable to recover it. 
00:35:51.649 [2024-07-13 01:01:03.025885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.649 [2024-07-13 01:01:03.025938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.649 [2024-07-13 01:01:03.025952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.649 [2024-07-13 01:01:03.025959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.025965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.025978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.036050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.036136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.036150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.036157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.036162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.036175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.045943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.045995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.046010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.046016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.046022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.046036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 
00:35:51.650 [2024-07-13 01:01:03.055966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.056016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.056031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.056037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.056043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.056058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.066052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.066107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.066122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.066132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.066138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.066151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.076016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.076076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.076091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.076098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.076104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.076117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 
00:35:51.650 [2024-07-13 01:01:03.086124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.086177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.086192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.086198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.086204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.086218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.096155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.096209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.096228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.096235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.096240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.096254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.106106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.106158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.106173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.106179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.106185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.106198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 
00:35:51.650 [2024-07-13 01:01:03.116216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.116274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.116289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.116296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.116302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.116315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.126242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.126300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.126314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.126321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.126327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.126340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 00:35:51.650 [2024-07-13 01:01:03.136259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:51.650 [2024-07-13 01:01:03.136310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:51.650 [2024-07-13 01:01:03.136324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:51.650 [2024-07-13 01:01:03.136331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:51.650 [2024-07-13 01:01:03.136337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:51.650 [2024-07-13 01:01:03.136350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:51.650 qpair failed and we were unable to recover it. 
[the same seven-line CONNECT failure sequence recurs another 66 times at roughly 10 ms intervals, from 01:01:03.146 through 01:01:03.798, with the elapsed-time prefix advancing from 00:35:51.650 to 00:35:52.434; only the timestamps change. Every iteration reports controller ID 0x1, tqpair=0x1321b60, qpair id 3, and status sct 1, sc 130, and each ends with "qpair failed and we were unable to recover it."]
00:35:52.434 [2024-07-13 01:01:03.808169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.808223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.808241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.808247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.808253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.808267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-07-13 01:01:03.818187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.818257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.818272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.818279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.818284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.818297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-07-13 01:01:03.828234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.828282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.828296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.828305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.828311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.828325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-07-13 01:01:03.838273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.838322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.838336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.838343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.838348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.838362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-07-13 01:01:03.848279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.848333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.848347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.848354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.848360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.848373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-07-13 01:01:03.858274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.858328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.858342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.858348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.858354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.858367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 
00:35:52.434 [2024-07-13 01:01:03.868348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.868404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.868419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.868425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.868431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.868444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-07-13 01:01:03.878377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.878429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.434 [2024-07-13 01:01:03.878444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.434 [2024-07-13 01:01:03.878450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.434 [2024-07-13 01:01:03.878456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.434 [2024-07-13 01:01:03.878469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.434 qpair failed and we were unable to recover it. 00:35:52.434 [2024-07-13 01:01:03.888452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.434 [2024-07-13 01:01:03.888504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.888518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.888525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.888531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.888543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-07-13 01:01:03.898388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.898440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.898454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.898460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.898466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.898479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-07-13 01:01:03.908455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.908513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.908529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.908536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.908542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.908555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-07-13 01:01:03.918495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.918546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.918561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.918571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.918577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.918591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-07-13 01:01:03.928538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.928617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.928631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.928637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.928644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.928658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-07-13 01:01:03.938602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.938658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.938673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.938680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.938686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.938699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-07-13 01:01:03.948611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.948660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.948674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.948681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.948687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.948700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-07-13 01:01:03.958615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.958668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.958682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.958689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.958695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.958708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-07-13 01:01:03.968647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.968724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.968739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.968745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.968751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.968764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.435 [2024-07-13 01:01:03.978667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.978722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.978737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.978743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.978749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.978762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 
00:35:52.435 [2024-07-13 01:01:03.988707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.435 [2024-07-13 01:01:03.988760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.435 [2024-07-13 01:01:03.988774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.435 [2024-07-13 01:01:03.988781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.435 [2024-07-13 01:01:03.988787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.435 [2024-07-13 01:01:03.988799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.435 qpair failed and we were unable to recover it. 00:35:52.695 [2024-07-13 01:01:03.998676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.695 [2024-07-13 01:01:03.998736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.695 [2024-07-13 01:01:03.998750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.695 [2024-07-13 01:01:03.998757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.695 [2024-07-13 01:01:03.998763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.695 [2024-07-13 01:01:03.998777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.695 qpair failed and we were unable to recover it. 00:35:52.695 [2024-07-13 01:01:04.008703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.695 [2024-07-13 01:01:04.008761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.695 [2024-07-13 01:01:04.008780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.695 [2024-07-13 01:01:04.008786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.695 [2024-07-13 01:01:04.008792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.008806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 
00:35:52.696 [2024-07-13 01:01:04.018781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.018842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.018856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.018863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.018868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.018882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.028820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.028874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.028888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.028895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.028900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.028914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.038844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.038897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.038912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.038919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.038924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.038937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 
00:35:52.696 [2024-07-13 01:01:04.048910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.048963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.048977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.048983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.048989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.049002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.058933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.058995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.059010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.059017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.059022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.059035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.068926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.068981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.068995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.069002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.069007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.069021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 
00:35:52.696 [2024-07-13 01:01:04.078955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.079013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.079027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.079034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.079039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.079052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.089023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.089093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.089107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.089113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.089119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.089132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.099084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.099141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.099159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.099165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.099170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.099184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 
00:35:52.696 [2024-07-13 01:01:04.109053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.109102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.109117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.109123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.109129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.109142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.119072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.119126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.119141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.119147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.119153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.119166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.129145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.129200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.129214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.129221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.129230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.129243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 
00:35:52.696 [2024-07-13 01:01:04.139142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.139195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.139210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.139216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.139222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.139246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.149173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.696 [2024-07-13 01:01:04.149229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.696 [2024-07-13 01:01:04.149244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.696 [2024-07-13 01:01:04.149250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.696 [2024-07-13 01:01:04.149256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.696 [2024-07-13 01:01:04.149269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.696 qpair failed and we were unable to recover it. 00:35:52.696 [2024-07-13 01:01:04.159215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.159276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.159290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.159297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.159303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.159316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 
00:35:52.697 [2024-07-13 01:01:04.169228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.169280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.169295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.169301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.169307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.169320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 00:35:52.697 [2024-07-13 01:01:04.179180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.179235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.179249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.179256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.179261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.179275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 00:35:52.697 [2024-07-13 01:01:04.189282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.189332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.189350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.189357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.189362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.189375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 
00:35:52.697 [2024-07-13 01:01:04.199307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.199360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.199374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.199381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.199387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.199400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 00:35:52.697 [2024-07-13 01:01:04.209333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.209396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.209411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.209417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.209423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.209436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 00:35:52.697 [2024-07-13 01:01:04.219395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.219449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.219463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.219470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.219476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.219489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 
00:35:52.697 [2024-07-13 01:01:04.229392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.229442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.229456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.229462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.229468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.229484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 00:35:52.697 [2024-07-13 01:01:04.239417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.239474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.239489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.239496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.239502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.239515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 00:35:52.697 [2024-07-13 01:01:04.249463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.697 [2024-07-13 01:01:04.249517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.697 [2024-07-13 01:01:04.249532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.697 [2024-07-13 01:01:04.249538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.697 [2024-07-13 01:01:04.249544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.697 [2024-07-13 01:01:04.249557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.697 qpair failed and we were unable to recover it. 
00:35:52.957 [2024-07-13 01:01:04.259485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.957 [2024-07-13 01:01:04.259538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.957 [2024-07-13 01:01:04.259554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.957 [2024-07-13 01:01:04.259560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.957 [2024-07-13 01:01:04.259567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.957 [2024-07-13 01:01:04.259580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-07-13 01:01:04.269515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.957 [2024-07-13 01:01:04.269589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.957 [2024-07-13 01:01:04.269603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.957 [2024-07-13 01:01:04.269610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.957 [2024-07-13 01:01:04.269616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.957 [2024-07-13 01:01:04.269629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-07-13 01:01:04.279539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.957 [2024-07-13 01:01:04.279636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.957 [2024-07-13 01:01:04.279654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.957 [2024-07-13 01:01:04.279660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.957 [2024-07-13 01:01:04.279665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.957 [2024-07-13 01:01:04.279678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.957 qpair failed and we were unable to recover it. 
00:35:52.957 [2024-07-13 01:01:04.289494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.957 [2024-07-13 01:01:04.289548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.957 [2024-07-13 01:01:04.289563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.957 [2024-07-13 01:01:04.289569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.957 [2024-07-13 01:01:04.289575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.957 [2024-07-13 01:01:04.289588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-07-13 01:01:04.299585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.957 [2024-07-13 01:01:04.299634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.957 [2024-07-13 01:01:04.299648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.957 [2024-07-13 01:01:04.299655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.957 [2024-07-13 01:01:04.299661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.957 [2024-07-13 01:01:04.299674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.957 qpair failed and we were unable to recover it. 00:35:52.957 [2024-07-13 01:01:04.309616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:52.957 [2024-07-13 01:01:04.309669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:52.957 [2024-07-13 01:01:04.309683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:52.957 [2024-07-13 01:01:04.309690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:52.957 [2024-07-13 01:01:04.309695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:52.957 [2024-07-13 01:01:04.309709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.957 qpair failed and we were unable to recover it. 
00:35:52.957 [2024-07-13 01:01:04.319637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.319693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.319709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.319715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.319724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.319737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.329685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.329743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.329758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.329764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.329770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.329784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.339705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.339756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.339770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.339776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.339782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.339796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.349733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.349789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.349804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.349810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.349816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.349829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.359781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.359838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.359853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.359859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.359865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.359878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.369802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.369863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.369878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.369885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.369891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.369903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.379812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.379872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.379887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.379893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.379902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.379915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.389851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.389911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.389926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.389934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.389941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.389954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.399904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.399963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.399977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.399984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.399990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.400002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.409845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.409903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.409921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.409929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.409941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.409957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.419989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.420094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.420108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.420115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.420120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.420134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.429972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.430026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.430040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.430047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.430053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.430067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.439992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.440047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.440061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.440067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.440073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.440086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.449997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.958 [2024-07-13 01:01:04.450053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.958 [2024-07-13 01:01:04.450068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.958 [2024-07-13 01:01:04.450075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.958 [2024-07-13 01:01:04.450081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.958 [2024-07-13 01:01:04.450094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.958 qpair failed and we were unable to recover it.
00:35:52.958 [2024-07-13 01:01:04.460096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.959 [2024-07-13 01:01:04.460158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.959 [2024-07-13 01:01:04.460174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.959 [2024-07-13 01:01:04.460180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.959 [2024-07-13 01:01:04.460186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.959 [2024-07-13 01:01:04.460199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.959 qpair failed and we were unable to recover it.
00:35:52.959 [2024-07-13 01:01:04.470009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.959 [2024-07-13 01:01:04.470065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.959 [2024-07-13 01:01:04.470080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.959 [2024-07-13 01:01:04.470087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.959 [2024-07-13 01:01:04.470093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.959 [2024-07-13 01:01:04.470106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.959 qpair failed and we were unable to recover it.
00:35:52.959 [2024-07-13 01:01:04.480158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.959 [2024-07-13 01:01:04.480219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.959 [2024-07-13 01:01:04.480241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.959 [2024-07-13 01:01:04.480248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.959 [2024-07-13 01:01:04.480253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.959 [2024-07-13 01:01:04.480267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.959 qpair failed and we were unable to recover it.
00:35:52.959 [2024-07-13 01:01:04.490084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.959 [2024-07-13 01:01:04.490138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.959 [2024-07-13 01:01:04.490153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.959 [2024-07-13 01:01:04.490159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.959 [2024-07-13 01:01:04.490165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.959 [2024-07-13 01:01:04.490178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.959 qpair failed and we were unable to recover it.
00:35:52.959 [2024-07-13 01:01:04.500168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.959 [2024-07-13 01:01:04.500221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.959 [2024-07-13 01:01:04.500241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.959 [2024-07-13 01:01:04.500247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.959 [2024-07-13 01:01:04.500256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.959 [2024-07-13 01:01:04.500270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.959 qpair failed and we were unable to recover it.
00:35:52.959 [2024-07-13 01:01:04.510183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:52.959 [2024-07-13 01:01:04.510246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:52.959 [2024-07-13 01:01:04.510261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:52.959 [2024-07-13 01:01:04.510268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:52.959 [2024-07-13 01:01:04.510274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:52.959 [2024-07-13 01:01:04.510288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:52.959 qpair failed and we were unable to recover it.
00:35:53.219 [2024-07-13 01:01:04.520266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.219 [2024-07-13 01:01:04.520322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.219 [2024-07-13 01:01:04.520338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.219 [2024-07-13 01:01:04.520347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.219 [2024-07-13 01:01:04.520353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.219 [2024-07-13 01:01:04.520366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.219 qpair failed and we were unable to recover it.
00:35:53.219 [2024-07-13 01:01:04.530188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.219 [2024-07-13 01:01:04.530253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.219 [2024-07-13 01:01:04.530268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.219 [2024-07-13 01:01:04.530275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.219 [2024-07-13 01:01:04.530280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.219 [2024-07-13 01:01:04.530294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.219 qpair failed and we were unable to recover it.
00:35:53.219 [2024-07-13 01:01:04.540256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.219 [2024-07-13 01:01:04.540329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.219 [2024-07-13 01:01:04.540343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.219 [2024-07-13 01:01:04.540350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.219 [2024-07-13 01:01:04.540355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.219 [2024-07-13 01:01:04.540369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.219 qpair failed and we were unable to recover it.
00:35:53.219 [2024-07-13 01:01:04.550344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.219 [2024-07-13 01:01:04.550395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.219 [2024-07-13 01:01:04.550410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.219 [2024-07-13 01:01:04.550416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.219 [2024-07-13 01:01:04.550422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.219 [2024-07-13 01:01:04.550435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.219 qpair failed and we were unable to recover it.
00:35:53.219 [2024-07-13 01:01:04.560260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.219 [2024-07-13 01:01:04.560314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.219 [2024-07-13 01:01:04.560329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.219 [2024-07-13 01:01:04.560336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.219 [2024-07-13 01:01:04.560342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.560355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.570305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.570357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.570371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.570377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.570383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.570395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.580400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.580455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.580470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.580476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.580482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.580496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.590411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.590464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.590478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.590488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.590494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.590507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.600485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.600565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.600580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.600586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.600592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.600605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.610478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.610536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.610550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.610557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.610563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.610576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.620524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.620583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.620598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.620605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.620611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.620624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.630537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.630590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.630605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.630611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.630617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.630630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.640472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.640527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.640542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.640549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.640554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.640567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.650591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.650647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.650662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.650669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.650674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.650687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.660557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.660608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.660622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.660628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.660634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.660648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.670576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.670632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.670646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.670652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.670659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.670673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.680739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.680790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.680805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.680815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.680820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.680834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.690717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.690776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.690793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.690800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.690806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.690819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.700669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.220 [2024-07-13 01:01:04.700726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.220 [2024-07-13 01:01:04.700741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.220 [2024-07-13 01:01:04.700748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.220 [2024-07-13 01:01:04.700754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.220 [2024-07-13 01:01:04.700767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.220 qpair failed and we were unable to recover it.
00:35:53.220 [2024-07-13 01:01:04.710687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.221 [2024-07-13 01:01:04.710746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.221 [2024-07-13 01:01:04.710761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.221 [2024-07-13 01:01:04.710768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.221 [2024-07-13 01:01:04.710774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.221 [2024-07-13 01:01:04.710788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.221 qpair failed and we were unable to recover it.
00:35:53.221 [2024-07-13 01:01:04.720792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.221 [2024-07-13 01:01:04.720849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.221 [2024-07-13 01:01:04.720863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.221 [2024-07-13 01:01:04.720870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.221 [2024-07-13 01:01:04.720876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.221 [2024-07-13 01:01:04.720889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.221 qpair failed and we were unable to recover it.
00:35:53.221 [2024-07-13 01:01:04.730768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.221 [2024-07-13 01:01:04.730823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.221 [2024-07-13 01:01:04.730838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.221 [2024-07-13 01:01:04.730845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.221 [2024-07-13 01:01:04.730851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.221 [2024-07-13 01:01:04.730864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.221 qpair failed and we were unable to recover it.
00:35:53.221 [2024-07-13 01:01:04.740775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.221 [2024-07-13 01:01:04.740834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.221 [2024-07-13 01:01:04.740848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.221 [2024-07-13 01:01:04.740855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.221 [2024-07-13 01:01:04.740860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.221 [2024-07-13 01:01:04.740874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.221 qpair failed and we were unable to recover it.
00:35:53.221 [2024-07-13 01:01:04.750806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.221 [2024-07-13 01:01:04.750861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.221 [2024-07-13 01:01:04.750875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.221 [2024-07-13 01:01:04.750882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.221 [2024-07-13 01:01:04.750887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.221 [2024-07-13 01:01:04.750900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.221 qpair failed and we were unable to recover it.
00:35:53.221 [2024-07-13 01:01:04.760920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.221 [2024-07-13 01:01:04.760971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.221 [2024-07-13 01:01:04.760986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.221 [2024-07-13 01:01:04.760993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.221 [2024-07-13 01:01:04.760999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.221 [2024-07-13 01:01:04.761013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.221 qpair failed and we were unable to recover it.
00:35:53.221 [2024-07-13 01:01:04.770869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.221 [2024-07-13 01:01:04.770922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.221 [2024-07-13 01:01:04.770941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.221 [2024-07-13 01:01:04.770948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.221 [2024-07-13 01:01:04.770953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.221 [2024-07-13 01:01:04.770967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.221 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.780971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.781036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.781051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.781057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.781063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.781077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.790999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.791049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.791064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.791070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.791076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.791089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.801012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.801066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.801081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.801088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.801094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.801107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.811083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.811156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.811172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.811179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.811184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.811199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.821011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.821066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.821081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.821088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.821094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.821107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.831101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.831152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.831167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.831173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.831179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.831193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.841126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.841182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.841197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.841203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.841209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.841223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.851199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.851262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.851277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.851283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.851289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.851302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.861218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.861273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.861290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.861297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.861303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.861317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.871215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.871271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.871285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.871292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.871297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.871311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.881244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:53.481 [2024-07-13 01:01:04.881298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:53.481 [2024-07-13 01:01:04.881312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:53.481 [2024-07-13 01:01:04.881318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:53.481 [2024-07-13 01:01:04.881324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60
00:35:53.481 [2024-07-13 01:01:04.881337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:35:53.481 qpair failed and we were unable to recover it.
00:35:53.481 [2024-07-13 01:01:04.891304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.481 [2024-07-13 01:01:04.891373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.481 [2024-07-13 01:01:04.891387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.481 [2024-07-13 01:01:04.891394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.481 [2024-07-13 01:01:04.891400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.481 [2024-07-13 01:01:04.891413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.481 qpair failed and we were unable to recover it. 00:35:53.481 [2024-07-13 01:01:04.901302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.481 [2024-07-13 01:01:04.901358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.481 [2024-07-13 01:01:04.901372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.481 [2024-07-13 01:01:04.901378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.481 [2024-07-13 01:01:04.901384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.481 [2024-07-13 01:01:04.901404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.481 qpair failed and we were unable to recover it. 00:35:53.481 [2024-07-13 01:01:04.911341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.481 [2024-07-13 01:01:04.911400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.481 [2024-07-13 01:01:04.911414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.481 [2024-07-13 01:01:04.911421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.481 [2024-07-13 01:01:04.911427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.481 [2024-07-13 01:01:04.911440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.481 qpair failed and we were unable to recover it. 
00:35:53.481 [2024-07-13 01:01:04.921310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.921366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.921381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.921387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.921394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.921407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:04.931328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.931424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.931438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.931445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.931450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.931464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:04.941413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.941470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.941484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.941490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.941496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.941509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 
00:35:53.482 [2024-07-13 01:01:04.951443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.951500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.951517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.951524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.951529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.951542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:04.961480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.961537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.961552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.961558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.961564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.961577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:04.971503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.971559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.971572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.971579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.971585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.971598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 
00:35:53.482 [2024-07-13 01:01:04.981537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.981593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.981607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.981614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.981619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.981633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:04.991596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:04.991652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:04.991666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:04.991672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:04.991678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:04.991694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:05.001597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:05.001647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:05.001661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:05.001668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:05.001675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:05.001689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 
00:35:53.482 [2024-07-13 01:01:05.011557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:05.011609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:05.011625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:05.011631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:05.011638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:05.011651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:05.021581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:05.021635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:05.021650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:05.021657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:05.021663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:05.021677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 00:35:53.482 [2024-07-13 01:01:05.031692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.482 [2024-07-13 01:01:05.031745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.482 [2024-07-13 01:01:05.031760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.482 [2024-07-13 01:01:05.031767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.482 [2024-07-13 01:01:05.031774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.482 [2024-07-13 01:01:05.031788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.482 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-07-13 01:01:05.041707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.041760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.041777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.041784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.041790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.041803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.051671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.051725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.051739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.051745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.051751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.051764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.061732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.061786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.061800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.061807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.061813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.061826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-07-13 01:01:05.071799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.071884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.071899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.071905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.071911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.071924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.081816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.081868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.081883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.081889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.081898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.081911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.091853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.091909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.091924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.091930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.091936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.091949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-07-13 01:01:05.101822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.101875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.101890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.101896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.101903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.101916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.111936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.111985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.111999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.112005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.112011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.112024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.121980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.122034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.122049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.122056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.122061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.122075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-07-13 01:01:05.131975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.132034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.132049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.132056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.132061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.132075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.142006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.142061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.142075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.142082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.142087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.142101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-07-13 01:01:05.152037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.152090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.152104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.152111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.743 [2024-07-13 01:01:05.152117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.743 [2024-07-13 01:01:05.152130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-07-13 01:01:05.162057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.743 [2024-07-13 01:01:05.162110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.743 [2024-07-13 01:01:05.162124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.743 [2024-07-13 01:01:05.162131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.162137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.162150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.172087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.172140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.172155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.172161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.172170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.172184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.182141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.182195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.182209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.182216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.182222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.182241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 
00:35:53.744 [2024-07-13 01:01:05.192175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.192231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.192246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.192252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.192258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.192272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.202203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.202254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.202269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.202275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.202281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.202294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.212221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.212284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.212299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.212305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.212310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.212324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 
00:35:53.744 [2024-07-13 01:01:05.222255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.222317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.222332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.222339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.222344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.222357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.232205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.232260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.232275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.232281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.232287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.232301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.242333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.242419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.242433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.242439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.242444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.242457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 
00:35:53.744 [2024-07-13 01:01:05.252280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.252333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.252348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.252355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.252361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.252374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.262310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.262365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.262380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.262387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.262396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.262410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.272399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.272462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.272477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.272483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.272489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.272503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 
00:35:53.744 [2024-07-13 01:01:05.282361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.282412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.282427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.282433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.282439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.282453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-07-13 01:01:05.292452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:53.744 [2024-07-13 01:01:05.292507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:53.744 [2024-07-13 01:01:05.292521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:53.744 [2024-07-13 01:01:05.292527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:53.744 [2024-07-13 01:01:05.292533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:53.744 [2024-07-13 01:01:05.292547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:53.744 qpair failed and we were unable to recover it. 00:35:54.004 [2024-07-13 01:01:05.302477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.302535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.302550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.302556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.302562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.302576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 
00:35:54.004 [2024-07-13 01:01:05.312531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.312583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.312598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.312605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.312610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.312623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 00:35:54.004 [2024-07-13 01:01:05.322534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.322585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.322599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.322605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.322611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.322624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 00:35:54.004 [2024-07-13 01:01:05.332585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.332639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.332654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.332661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.332666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.332681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 
00:35:54.004 [2024-07-13 01:01:05.342622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.342679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.342693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.342700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.342705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.342719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 00:35:54.004 [2024-07-13 01:01:05.352610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.352658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.352673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.352682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.352688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.352701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 00:35:54.004 [2024-07-13 01:01:05.362644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.362695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.362709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.362716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.362722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.362735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 
00:35:54.004 [2024-07-13 01:01:05.372682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.372738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.372753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.372759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.372765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.372778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 00:35:54.004 [2024-07-13 01:01:05.382730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.382789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.382803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.382810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.382815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.382828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 00:35:54.004 [2024-07-13 01:01:05.392764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.392818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.392833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.392839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.392845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1321b60 00:35:54.004 [2024-07-13 01:01:05.392859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:54.004 qpair failed and we were unable to recover it. 
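The repetition above is the host side of the disconnect test behaving as designed: the target forgot controller ID 0x1 when its admin connection was torn down, so every attempt to attach an I/O queue to that controller is rejected. For a Fabrics CONNECT command, sct 1 / sc 130 (0x82) decodes to "Connect Invalid Parameters", which matches the target's "Unknown controller ID" complaint. A hypothetical triage step, assuming the SPDK repo layout used by this job, is to ask the running target which controllers and qpairs the subsystem still tracks:

    #!/usr/bin/env bash
    # Hypothetical triage helper (not part of this run). The rpc.py path is
    # this job's workspace; the NQN is the subsystem named in the log above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # If controller ID 0x1 is absent here, every I/O-queue CONNECT naming
    # cntlid 1 will keep failing with "Unknown controller ID" / sct 1, sc 130.
    "$rpc" nvmf_subsystem_get_controllers "$nqn"
    "$rpc" nvmf_subsystem_get_qpairs "$nqn"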
00:35:54.004 [2024-07-13 01:01:05.402792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.004 [2024-07-13 01:01:05.402896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.004 [2024-07-13 01:01:05.402948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.004 [2024-07-13 01:01:05.402972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.004 [2024-07-13 01:01:05.402991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa960000b90 00:35:54.004 [2024-07-13 01:01:05.403040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:54.004 qpair failed and we were unable to recover it. 00:35:54.005 [2024-07-13 01:01:05.412779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:54.005 [2024-07-13 01:01:05.412855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:54.005 [2024-07-13 01:01:05.412883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:54.005 [2024-07-13 01:01:05.412898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:54.005 [2024-07-13 01:01:05.412910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa960000b90 00:35:54.005 [2024-07-13 01:01:05.412939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:54.005 qpair failed and we were unable to recover it. 00:35:54.005 [2024-07-13 01:01:05.413036] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:35:54.005 A controller has encountered a failure and is being reset. 00:35:54.005 [2024-07-13 01:01:05.413126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132fb60 (9): Bad file descriptor 00:35:54.005 Controller properly reset. 00:35:54.005 Initializing NVMe Controllers 00:35:54.005 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:54.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:54.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:54.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:54.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:54.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:54.005 Initialization complete. Launching workers. 
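The retries stop once a keep-alive submission fails; the host resets the controller and reattaches, and the "Initializing NVMe Controllers ... Launching workers" banner is the reattach output of the SPDK reconnect-style example app that drives this test case. A minimal sketch of launching such an app by hand follows; the binary path and every flag are assumptions for illustration, not values read from this log:

    # Sketch only: drive a comparable connect/IO/reconnect loop manually.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/examples/reconnect" \
        -q 32 -o 4096 -w randrw -M 50 -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'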
00:35:54.005 Starting thread on core 1 00:35:54.005 Starting thread on core 2 00:35:54.005 Starting thread on core 3 00:35:54.005 Starting thread on core 0 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:54.005 00:35:54.005 real 0m11.243s 00:35:54.005 user 0m22.075s 00:35:54.005 sys 0m4.609s 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:54.005 ************************************ 00:35:54.005 END TEST nvmf_target_disconnect_tc2 00:35:54.005 ************************************ 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:54.005 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:54.005 rmmod nvme_tcp 00:35:54.005 rmmod nvme_fabrics 00:35:54.264 rmmod nvme_keyring 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1620678 ']' 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1620678 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1620678 ']' 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1620678 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1620678 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1620678' 00:35:54.264 killing process with pid 1620678 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1620678 00:35:54.264 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1620678 00:35:54.524 
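Teardown from here is the usual nvmftestfini sequence: sync, unload the kernel NVMe/TCP initiator modules, kill the target process, then (on the lines below) remove the SPDK network namespace and flush the test interface. A condensed sketch of that cleanup, with the pid and interface name parameterized as assumptions:

    # Condensed sketch of the nvmftestfini-style cleanup visible above and
    # below; target_pid and the default interface name are assumptions.
    cleanup_nvmf_tcp() {
        local target_pid=$1 iface=${2:-cvl_0_1}
        sync
        modprobe -v -r nvme-tcp        # also unloads nvme_fabrics/nvme_keyring users
        modprobe -v -r nvme-fabrics || true
        kill "$target_pid" 2>/dev/null || true
        wait "$target_pid" 2>/dev/null || true
        ip -4 addr flush "$iface" 2>/dev/null || true
    }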
01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:54.524 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:54.524 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:54.524 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:54.524 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:54.524 01:01:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.524 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:54.524 01:01:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.432 01:01:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:56.432 00:35:56.432 real 0m19.711s 00:35:56.432 user 0m49.144s 00:35:56.432 sys 0m9.345s 00:35:56.432 01:01:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:56.432 01:01:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:56.432 ************************************ 00:35:56.432 END TEST nvmf_target_disconnect 00:35:56.432 ************************************ 00:35:56.432 01:01:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:56.432 01:01:07 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:35:56.432 01:01:07 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:56.432 01:01:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.432 01:01:07 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:35:56.432 00:35:56.432 real 28m54.098s 00:35:56.432 user 73m39.583s 00:35:56.432 sys 7m48.281s 00:35:56.432 01:01:07 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:56.432 01:01:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.432 ************************************ 00:35:56.432 END TEST nvmf_tcp 00:35:56.432 ************************************ 00:35:56.692 01:01:08 -- common/autotest_common.sh@1142 -- # return 0 00:35:56.692 01:01:08 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:35:56.692 01:01:08 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:56.692 01:01:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:56.692 01:01:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:56.692 01:01:08 -- common/autotest_common.sh@10 -- # set +x 00:35:56.692 ************************************ 00:35:56.692 START TEST spdkcli_nvmf_tcp 00:35:56.692 ************************************ 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:56.692 * Looking for test storage... 
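Each suite in this log is launched through run_test, which produces the banner and timing blocks seen above ("END TEST ...", real/user/sys). A simplified sketch of that wrapper, assuming the shape of test/common/autotest_common.sh rather than quoting it:

# run_test <name> <command...>: time the suite and frame it with banners
run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                # emits the real/user/sys lines in this log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}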
00:35:56.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.692 01:01:08 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1622205 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1622205 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1622205 ']' 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:56.693 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.693 [2024-07-13 01:01:08.217981] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:56.693 [2024-07-13 01:01:08.218029] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622205 ] 00:35:56.693 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.952 [2024-07-13 01:01:08.284046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:56.952 [2024-07-13 01:01:08.324413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.952 [2024-07-13 01:01:08.324415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.952 01:01:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:56.952 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:56.952 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:56.952 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:56.953 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:56.953 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:56.953 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:56.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:56.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:56.953 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:56.953 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:56.953 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:56.953 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:56.953 ' 00:35:59.490 [2024-07-13 01:01:11.024174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.865 [2024-07-13 01:01:12.304429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:03.435 [2024-07-13 01:01:14.679705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:05.338 [2024-07-13 01:01:16.734239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:07.239 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:07.239 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:07.239 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:07.239 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:07.239 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:07.239 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:07.239 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:07.239 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:07.239 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:07.239 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:07.239 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:07.239 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:07.239 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:07.239 01:01:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:07.497 01:01:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:07.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:07.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:07.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:07.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:07.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:07.497 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:07.497 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:07.497 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:07.497 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:07.497 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:07.497 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:07.497 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:07.497 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:07.497 ' 00:36:12.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:12.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:12.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:12.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:12.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:12.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:12.765 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:12.765 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:12.765 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:12.765 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:12.765 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:36:12.765 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:12.765 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:12.765 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1622205 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1622205 ']' 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1622205 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:12.765 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1622205 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1622205' 00:36:13.024 killing process with pid 1622205 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1622205 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1622205 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1622205 ']' 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1622205 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1622205 ']' 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1622205 00:36:13.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1622205) - No such process 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1622205 is not found' 00:36:13.024 Process with pid 1622205 is not found 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:13.024 00:36:13.024 real 0m16.475s 00:36:13.024 user 0m35.847s 00:36:13.024 sys 0m0.815s 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:13.024 01:01:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:13.024 ************************************ 00:36:13.024 END TEST spdkcli_nvmf_tcp 00:36:13.024 ************************************ 00:36:13.024 01:01:24 -- common/autotest_common.sh@1142 -- # return 0 00:36:13.024 01:01:24 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:13.024 01:01:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:13.024 01:01:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:13.024 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:36:13.283 ************************************ 00:36:13.283 START TEST nvmf_identify_passthru 00:36:13.283 ************************************ 00:36:13.283 01:01:24 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:13.283 * Looking for test storage... 00:36:13.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:13.283 01:01:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.283 01:01:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.283 01:01:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.283 01:01:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:13.283 01:01:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.283 01:01:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.283 01:01:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.283 01:01:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:13.283 01:01:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.283 01:01:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.283 01:01:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:13.283 01:01:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:13.283 01:01:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:13.283 01:01:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:19.854 01:01:30 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:19.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:19.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:19.854 Found net devices under 0000:86:00.0: cvl_0_0 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:19.854 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:19.855 Found net devices under 0000:86:00.1: cvl_0_1 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
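With both E810 ports detected, nvmf_tcp_init builds the test topology: cvl_0_0 is moved into a dedicated network namespace as the target endpoint (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule opens the NVMe/TCP port toward the initiator side. Condensed from the records that follow:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target sanity check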
00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:19.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:19.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:36:19.855 00:36:19.855 --- 10.0.0.2 ping statistics --- 00:36:19.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.855 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:19.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:19.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:36:19.855 00:36:19.855 --- 10.0.0.1 ping statistics --- 00:36:19.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.855 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:19.855 01:01:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:19.855 01:01:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.855 01:01:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:36:19.855 01:01:30 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:36:19.855 01:01:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:36:19.855 01:01:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:36:19.855 01:01:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:19.855 01:01:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:19.855 01:01:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:19.855 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.144 
01:01:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:36:23.144 01:01:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:23.144 01:01:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:23.144 01:01:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:23.404 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.601 01:01:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:27.601 01:01:38 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:27.601 01:01:38 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:27.601 01:01:38 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1629228 00:36:27.601 01:01:38 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:27.601 01:01:38 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:27.601 01:01:38 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1629228 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1629228 ']' 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:27.601 01:01:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:27.601 [2024-07-13 01:01:38.881003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:27.601 [2024-07-13 01:01:38.881050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:27.601 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.601 [2024-07-13 01:01:38.953988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:27.601 [2024-07-13 01:01:38.995162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:27.601 [2024-07-13 01:01:38.995202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:27.601 [2024-07-13 01:01:38.995209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:27.601 [2024-07-13 01:01:38.995215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:27.601 [2024-07-13 01:01:38.995220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:27.601 [2024-07-13 01:01:38.995276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.601 [2024-07-13 01:01:38.995388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:27.601 [2024-07-13 01:01:38.995492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.601 [2024-07-13 01:01:38.995494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:27.601 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:27.601 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:36:27.601 01:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:27.602 INFO: Log level set to 20 00:36:27.602 INFO: Requests: 00:36:27.602 { 00:36:27.602 "jsonrpc": "2.0", 00:36:27.602 "method": "nvmf_set_config", 00:36:27.602 "id": 1, 00:36:27.602 "params": { 00:36:27.602 "admin_cmd_passthru": { 00:36:27.602 "identify_ctrlr": true 00:36:27.602 } 00:36:27.602 } 00:36:27.602 } 00:36:27.602 00:36:27.602 INFO: response: 00:36:27.602 { 00:36:27.602 "jsonrpc": "2.0", 00:36:27.602 "id": 1, 00:36:27.602 "result": true 00:36:27.602 } 00:36:27.602 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.602 01:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:27.602 INFO: Setting log level to 20 00:36:27.602 INFO: Setting log level to 20 00:36:27.602 INFO: Log level set to 20 00:36:27.602 INFO: Log level set to 20 00:36:27.602 INFO: Requests: 00:36:27.602 { 00:36:27.602 "jsonrpc": "2.0", 00:36:27.602 "method": "framework_start_init", 00:36:27.602 "id": 1 00:36:27.602 } 00:36:27.602 00:36:27.602 INFO: Requests: 00:36:27.602 { 00:36:27.602 "jsonrpc": "2.0", 00:36:27.602 "method": "framework_start_init", 00:36:27.602 "id": 1 00:36:27.602 } 00:36:27.602 00:36:27.602 [2024-07-13 01:01:39.107120] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:27.602 INFO: response: 00:36:27.602 { 00:36:27.602 "jsonrpc": "2.0", 00:36:27.602 "id": 1, 00:36:27.602 "result": true 00:36:27.602 } 00:36:27.602 00:36:27.602 INFO: response: 00:36:27.602 { 00:36:27.602 "jsonrpc": "2.0", 00:36:27.602 "id": 1, 00:36:27.602 "result": true 00:36:27.602 } 00:36:27.602 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.602 01:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.602 01:01:39 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:27.602 INFO: Setting log level to 40 00:36:27.602 INFO: Setting log level to 40 00:36:27.602 INFO: Setting log level to 40 00:36:27.602 [2024-07-13 01:01:39.120618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.602 01:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:27.602 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:27.861 01:01:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:27.861 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.861 01:01:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 Nvme0n1 00:36:31.156 01:01:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.156 01:01:41 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:31.156 01:01:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.156 01:01:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 01:01:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.156 01:01:41 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 [2024-07-13 01:01:42.016630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 [ 00:36:31.156 { 00:36:31.156 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:31.156 "subtype": "Discovery", 00:36:31.156 "listen_addresses": [], 00:36:31.156 "allow_any_host": true, 00:36:31.156 "hosts": [] 00:36:31.156 }, 00:36:31.156 { 00:36:31.156 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:31.156 "subtype": "NVMe", 00:36:31.156 "listen_addresses": [ 00:36:31.156 { 00:36:31.156 "trtype": "TCP", 00:36:31.156 "adrfam": "IPv4", 00:36:31.156 "traddr": "10.0.0.2", 00:36:31.156 "trsvcid": "4420" 00:36:31.156 } 00:36:31.156 ], 00:36:31.156 "allow_any_host": true, 00:36:31.156 "hosts": [], 00:36:31.156 "serial_number": 
"SPDK00000000000001", 00:36:31.156 "model_number": "SPDK bdev Controller", 00:36:31.156 "max_namespaces": 1, 00:36:31.156 "min_cntlid": 1, 00:36:31.156 "max_cntlid": 65519, 00:36:31.156 "namespaces": [ 00:36:31.156 { 00:36:31.156 "nsid": 1, 00:36:31.156 "bdev_name": "Nvme0n1", 00:36:31.156 "name": "Nvme0n1", 00:36:31.156 "nguid": "4742DC327C7A4842BFB6691DC5E963D1", 00:36:31.156 "uuid": "4742dc32-7c7a-4842-bfb6-691dc5e963d1" 00:36:31.156 } 00:36:31.156 ] 00:36:31.156 } 00:36:31.156 ] 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:31.156 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:31.156 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:31.156 01:01:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:31.156 rmmod nvme_tcp 00:36:31.156 rmmod nvme_fabrics 00:36:31.156 rmmod nvme_keyring 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:31.156 01:01:42 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1629228 ']' 00:36:31.156 01:01:42 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1629228 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1629228 ']' 00:36:31.156 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1629228 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629228 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629228' 00:36:31.157 killing process with pid 1629228 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1629228 00:36:31.157 01:01:42 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1629228 00:36:33.071 01:01:44 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:33.071 01:01:44 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:33.071 01:01:44 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:33.071 01:01:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:33.071 01:01:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:33.071 01:01:44 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.071 01:01:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.071 01:01:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.998 01:01:46 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:34.998 00:36:34.998 real 0m21.609s 00:36:34.998 user 0m27.885s 00:36:34.998 sys 0m5.056s 00:36:34.998 01:01:46 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:34.998 01:01:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 ************************************ 00:36:34.998 END TEST nvmf_identify_passthru 00:36:34.998 ************************************ 00:36:34.998 01:01:46 -- common/autotest_common.sh@1142 -- # return 0 00:36:34.998 01:01:46 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:34.998 01:01:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:34.998 01:01:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:34.998 01:01:46 -- common/autotest_common.sh@10 -- # set +x 00:36:34.998 ************************************ 00:36:34.998 START TEST nvmf_dif 00:36:34.998 ************************************ 00:36:34.998 01:01:46 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:34.998 * Looking for test storage... 
00:36:34.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:34.998 01:01:46 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.998 01:01:46 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.998 01:01:46 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.998 01:01:46 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.998 01:01:46 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.998 01:01:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.998 01:01:46 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.998 01:01:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.998 01:01:46 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:36:34.999 01:01:46 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:34.999 01:01:46 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:34.999 01:01:46 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:34.999 01:01:46 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:34.999 01:01:46 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:34.999 01:01:46 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.999 01:01:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:34.999 01:01:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:34.999 01:01:46 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:36:34.999 01:01:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:40.280 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:40.280 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:40.280 Found net devices under 0000:86:00.0: cvl_0_0 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:40.280 Found net devices under 0000:86:00.1: cvl_0_1 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.280 01:01:51 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.281 01:01:51 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:40.281 01:01:51 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:40.281 01:01:51 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.281 01:01:51 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.540 01:01:51 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:40.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:36:40.540 00:36:40.540 --- 10.0.0.2 ping statistics --- 00:36:40.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.540 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:40.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:36:40.540 00:36:40.540 --- 10.0.0.1 ping statistics --- 00:36:40.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.540 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:40.540 01:01:51 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:43.072 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:43.072 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:43.330 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:43.330 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:43.330 01:01:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:43.330 01:01:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1634679 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1634679 00:36:43.330 01:01:54 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1634679 ']' 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:43.330 01:01:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:43.589 [2024-07-13 01:01:54.901663] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:43.589 [2024-07-13 01:01:54.901707] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.589 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.589 [2024-07-13 01:01:54.972209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.589 [2024-07-13 01:01:55.010551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.589 [2024-07-13 01:01:55.010591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:43.589 [2024-07-13 01:01:55.010597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.589 [2024-07-13 01:01:55.010603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.589 [2024-07-13 01:01:55.010608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:43.589 [2024-07-13 01:01:55.010625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.156 01:01:55 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:44.156 01:01:55 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:36:44.156 01:01:55 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:44.156 01:01:55 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:44.156 01:01:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:44.417 01:01:55 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:44.417 01:01:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:44.417 01:01:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:44.417 01:01:55 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.417 01:01:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:44.417 [2024-07-13 01:01:55.748162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:44.417 01:01:55 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.417 01:01:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:44.417 01:01:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:44.417 01:01:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:44.417 01:01:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:44.417 ************************************ 00:36:44.417 START TEST fio_dif_1_default 00:36:44.417 ************************************ 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:44.417 bdev_null0 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:44.417 [2024-07-13 01:01:55.820458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:44.417 { 00:36:44.417 "params": { 00:36:44.417 "name": "Nvme$subsystem", 00:36:44.417 "trtype": "$TEST_TRANSPORT", 00:36:44.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.417 "adrfam": "ipv4", 00:36:44.417 "trsvcid": "$NVMF_PORT", 00:36:44.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.417 "hdgst": ${hdgst:-false}, 00:36:44.417 "ddgst": ${ddgst:-false} 00:36:44.417 }, 00:36:44.417 "method": "bdev_nvme_attach_controller" 00:36:44.417 } 00:36:44.417 EOF 00:36:44.417 )") 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:44.417 "params": { 00:36:44.417 "name": "Nvme0", 00:36:44.417 "trtype": "tcp", 00:36:44.417 "traddr": "10.0.0.2", 00:36:44.417 "adrfam": "ipv4", 00:36:44.417 "trsvcid": "4420", 00:36:44.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.417 "hdgst": false, 00:36:44.417 "ddgst": false 00:36:44.417 }, 00:36:44.417 "method": "bdev_nvme_attach_controller" 00:36:44.417 }' 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:44.417 01:01:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.676 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:44.676 fio-3.35 00:36:44.676 Starting 1 thread 00:36:44.676 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.885 00:36:56.885 filename0: (groupid=0, jobs=1): err= 0: pid=1635059: Sat Jul 13 01:02:06 2024 00:36:56.885 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:36:56.885 slat (nsec): min=5647, max=25182, avg=6289.79, stdev=1134.48 00:36:56.885 clat (usec): min=40864, max=45543, avg=41009.83, stdev=304.89 00:36:56.885 lat (usec): min=40870, max=45568, avg=41016.12, stdev=305.35 00:36:56.885 clat percentiles (usec): 00:36:56.885 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:56.885 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:56.885 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:56.885 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:36:56.885 | 99.99th=[45351] 00:36:56.885 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:36:56.885 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:56.885 
lat (msec) : 50=100.00% 00:36:56.885 cpu : usr=94.72%, sys=5.03%, ctx=8, majf=0, minf=205 00:36:56.885 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.885 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.885 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:56.885 00:36:56.885 Run status group 0 (all jobs): 00:36:56.885 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 00:36:56.885 real 0m11.005s 00:36:56.885 user 0m16.001s 00:36:56.885 sys 0m0.777s 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 ************************************ 00:36:56.885 END TEST fio_dif_1_default 00:36:56.885 ************************************ 00:36:56.885 01:02:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:56.885 01:02:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:56.885 01:02:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:56.885 01:02:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 ************************************ 00:36:56.885 START TEST fio_dif_1_multi_subsystems 00:36:56.885 ************************************ 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:56.885 01:02:06 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 bdev_null0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 [2024-07-13 01:02:06.902601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 bdev_null1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:56.885 { 00:36:56.885 "params": { 00:36:56.885 "name": "Nvme$subsystem", 00:36:56.885 "trtype": "$TEST_TRANSPORT", 00:36:56.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:56.885 "adrfam": "ipv4", 00:36:56.885 "trsvcid": "$NVMF_PORT", 00:36:56.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:56.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:56.885 "hdgst": ${hdgst:-false}, 00:36:56.885 "ddgst": ${ddgst:-false} 00:36:56.885 }, 00:36:56.885 "method": "bdev_nvme_attach_controller" 00:36:56.885 } 00:36:56.885 EOF 00:36:56.885 )") 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:56.885 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:56.886 
01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:56.886 { 00:36:56.886 "params": { 00:36:56.886 "name": "Nvme$subsystem", 00:36:56.886 "trtype": "$TEST_TRANSPORT", 00:36:56.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:56.886 "adrfam": "ipv4", 00:36:56.886 "trsvcid": "$NVMF_PORT", 00:36:56.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:56.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:56.886 "hdgst": ${hdgst:-false}, 00:36:56.886 "ddgst": ${ddgst:-false} 00:36:56.886 }, 00:36:56.886 "method": "bdev_nvme_attach_controller" 00:36:56.886 } 00:36:56.886 EOF 00:36:56.886 )") 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:56.886 "params": { 00:36:56.886 "name": "Nvme0", 00:36:56.886 "trtype": "tcp", 00:36:56.886 "traddr": "10.0.0.2", 00:36:56.886 "adrfam": "ipv4", 00:36:56.886 "trsvcid": "4420", 00:36:56.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.886 "hdgst": false, 00:36:56.886 "ddgst": false 00:36:56.886 }, 00:36:56.886 "method": "bdev_nvme_attach_controller" 00:36:56.886 },{ 00:36:56.886 "params": { 00:36:56.886 "name": "Nvme1", 00:36:56.886 "trtype": "tcp", 00:36:56.886 "traddr": "10.0.0.2", 00:36:56.886 "adrfam": "ipv4", 00:36:56.886 "trsvcid": "4420", 00:36:56.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:56.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:56.886 "hdgst": false, 00:36:56.886 "ddgst": false 00:36:56.886 }, 00:36:56.886 "method": "bdev_nvme_attach_controller" 00:36:56.886 }' 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:56.886 01:02:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:56.886 01:02:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:56.886 01:02:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:56.886 01:02:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:56.886 01:02:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:56.886 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:56.886 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:56.886 fio-3.35 00:36:56.886 Starting 2 threads 00:36:56.886 EAL: No free 2048 kB hugepages reported on node 1 00:37:06.863 00:37:06.863 filename0: (groupid=0, jobs=1): err= 0: pid=1637025: Sat Jul 13 01:02:18 2024 00:37:06.863 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:37:06.863 slat (nsec): min=6069, max=24423, avg=7886.96, stdev=2678.30 00:37:06.863 clat (usec): min=40827, max=42039, avg=41002.07, stdev=164.28 00:37:06.863 lat (usec): min=40833, max=42050, avg=41009.96, stdev=164.46 00:37:06.863 clat percentiles (usec): 00:37:06.863 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:06.863 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:06.863 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:06.863 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:06.863 | 99.99th=[42206] 
00:37:06.863 bw ( KiB/s): min= 384, max= 416, per=33.76%, avg=388.80, stdev=11.72, samples=20 00:37:06.863 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:06.863 lat (msec) : 50=100.00% 00:37:06.863 cpu : usr=97.42%, sys=2.32%, ctx=11, majf=0, minf=98 00:37:06.863 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.863 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.863 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:06.863 filename1: (groupid=0, jobs=1): err= 0: pid=1637026: Sat Jul 13 01:02:18 2024 00:37:06.863 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10011msec) 00:37:06.863 slat (nsec): min=6028, max=39456, avg=7199.58, stdev=2219.59 00:37:06.863 clat (usec): min=415, max=42543, avg=21053.65, stdev=20516.74 00:37:06.863 lat (usec): min=421, max=42550, avg=21060.85, stdev=20516.05 00:37:06.863 clat percentiles (usec): 00:37:06.863 | 1.00th=[ 433], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 469], 00:37:06.863 | 30.00th=[ 482], 40.00th=[ 570], 50.00th=[40633], 60.00th=[41157], 00:37:06.863 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:37:06.863 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:06.863 | 99.99th=[42730] 00:37:06.863 bw ( KiB/s): min= 704, max= 768, per=65.96%, avg=758.40, stdev=23.45, samples=20 00:37:06.863 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:37:06.863 lat (usec) : 500=36.26%, 750=13.26%, 1000=0.37% 00:37:06.863 lat (msec) : 50=50.11% 00:37:06.863 cpu : usr=97.43%, sys=2.31%, ctx=11, majf=0, minf=175 00:37:06.863 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.863 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.863 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:06.863 00:37:06.863 Run status group 0 (all jobs): 00:37:06.863 READ: bw=1149KiB/s (1177kB/s), 390KiB/s-759KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), run=10011-10011msec 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:06.863 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.864 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:06.864 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:06.864 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:06.864 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:06.864 00:37:06.864 real 0m11.503s 00:37:06.864 user 0m26.212s 00:37:06.864 sys 0m0.784s 00:37:06.864 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:06.864 01:02:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:06.864 ************************************ 00:37:06.864 END TEST fio_dif_1_multi_subsystems 00:37:06.864 ************************************ 00:37:06.864 01:02:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:06.864 01:02:18 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:06.864 01:02:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:06.864 01:02:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:06.864 01:02:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.122 ************************************ 00:37:07.122 START TEST fio_dif_rand_params 00:37:07.122 ************************************ 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:07.123 bdev_null0 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:07.123 [2024-07-13 01:02:18.472266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:07.123 { 00:37:07.123 "params": { 00:37:07.123 "name": "Nvme$subsystem", 00:37:07.123 "trtype": "$TEST_TRANSPORT", 00:37:07.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.123 "adrfam": "ipv4", 00:37:07.123 "trsvcid": "$NVMF_PORT", 00:37:07.123 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.123 "hdgst": ${hdgst:-false}, 00:37:07.123 "ddgst": ${ddgst:-false} 00:37:07.123 }, 00:37:07.123 "method": "bdev_nvme_attach_controller" 00:37:07.123 } 00:37:07.123 EOF 00:37:07.123 )") 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:07.123 "params": { 00:37:07.123 "name": "Nvme0", 00:37:07.123 "trtype": "tcp", 00:37:07.123 "traddr": "10.0.0.2", 00:37:07.123 "adrfam": "ipv4", 00:37:07.123 "trsvcid": "4420", 00:37:07.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:07.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:07.123 "hdgst": false, 00:37:07.123 "ddgst": false 00:37:07.123 }, 00:37:07.123 "method": "bdev_nvme_attach_controller" 00:37:07.123 }' 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:07.123 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:07.124 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:07.124 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:07.124 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:07.124 01:02:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.383 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:07.383 ... 
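Before fio starts its threads, the null-bdev subsystem exercised here was stood up over RPC; condensed, the create_subsystem/destroy_subsystem pair traced around this run looks like the following sketch (assumption: rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; arguments are taken verbatim from the trace):

sub_id=0
# 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 3.
rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 3
rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
    --serial-number "53313233-$sub_id" --allow-any-host
rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
    -t tcp -a 10.0.0.2 -s 4420
# Teardown, traced after the run completes, reverses the setup:
rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub_id"
rpc_cmd bdev_null_delete "bdev_null$sub_id"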
00:37:07.383 fio-3.35 00:37:07.383 Starting 3 threads 00:37:07.383 EAL: No free 2048 kB hugepages reported on node 1 00:37:13.990 00:37:13.990 filename0: (groupid=0, jobs=1): err= 0: pid=1638960: Sat Jul 13 01:02:24 2024 00:37:13.990 read: IOPS=308, BW=38.6MiB/s (40.5MB/s)(194MiB/5018msec) 00:37:13.990 slat (nsec): min=6285, max=31568, avg=11160.18, stdev=2297.25 00:37:13.991 clat (usec): min=3218, max=50902, avg=9697.51, stdev=7363.47 00:37:13.991 lat (usec): min=3224, max=50915, avg=9708.68, stdev=7363.37 00:37:13.991 clat percentiles (usec): 00:37:13.991 | 1.00th=[ 3752], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6849], 00:37:13.991 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 8979], 00:37:13.991 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11600], 00:37:13.991 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[51119], 00:37:13.991 | 99.99th=[51119] 00:37:13.991 bw ( KiB/s): min=34048, max=49920, per=34.53%, avg=39603.20, stdev=4667.74, samples=10 00:37:13.991 iops : min= 266, max= 390, avg=309.40, stdev=36.47, samples=10 00:37:13.991 lat (msec) : 4=2.06%, 10=78.97%, 20=15.48%, 50=3.16%, 100=0.32% 00:37:13.991 cpu : usr=93.96%, sys=5.74%, ctx=10, majf=0, minf=135 00:37:13.991 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:13.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.991 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:13.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:13.991 filename0: (groupid=0, jobs=1): err= 0: pid=1638961: Sat Jul 13 01:02:24 2024 00:37:13.991 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(183MiB/5035msec) 00:37:13.991 slat (nsec): min=6251, max=52901, avg=11192.25, stdev=2548.53 00:37:13.991 clat (usec): min=3378, max=88817, avg=10317.83, stdev=8171.39 00:37:13.991 lat (usec): min=3386, max=88831, avg=10329.02, stdev=8171.35 00:37:13.991 clat percentiles (usec): 00:37:13.991 | 1.00th=[ 3884], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 6849], 00:37:13.991 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:37:13.991 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11338], 95.00th=[12256], 00:37:13.991 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52167], 99.95th=[88605], 00:37:13.991 | 99.99th=[88605] 00:37:13.991 bw ( KiB/s): min=14592, max=45824, per=32.57%, avg=37350.40, stdev=8890.61, samples=10 00:37:13.991 iops : min= 114, max= 358, avg=291.80, stdev=69.46, samples=10 00:37:13.991 lat (msec) : 4=1.71%, 10=67.31%, 20=26.95%, 50=3.08%, 100=0.96% 00:37:13.991 cpu : usr=94.50%, sys=5.20%, ctx=12, majf=0, minf=166 00:37:13.991 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:13.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.991 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:13.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:13.991 filename0: (groupid=0, jobs=1): err= 0: pid=1638962: Sat Jul 13 01:02:24 2024 00:37:13.991 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(187MiB/5002msec) 00:37:13.991 slat (nsec): min=6244, max=33423, avg=10927.93, stdev=2407.58 00:37:13.991 clat (usec): min=3290, max=54973, avg=9997.03, stdev=7673.42 00:37:13.991 lat (usec): min=3297, max=54980, avg=10007.96, stdev=7673.48 00:37:13.991 clat percentiles 
(usec): 00:37:13.991 | 1.00th=[ 3720], 5.00th=[ 4883], 10.00th=[ 5866], 20.00th=[ 6652], 00:37:13.991 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:37:13.991 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11338], 95.00th=[12125], 00:37:13.991 | 99.00th=[49546], 99.50th=[50594], 99.90th=[54264], 99.95th=[54789], 00:37:13.991 | 99.99th=[54789] 00:37:13.991 bw ( KiB/s): min=30464, max=44288, per=33.42%, avg=38323.20, stdev=5203.30, samples=10 00:37:13.991 iops : min= 238, max= 346, avg=299.40, stdev=40.65, samples=10 00:37:13.991 lat (msec) : 4=3.60%, 10=67.11%, 20=25.68%, 50=2.87%, 100=0.73% 00:37:13.991 cpu : usr=94.30%, sys=5.40%, ctx=9, majf=0, minf=64 00:37:13.991 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:13.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:13.991 issued rwts: total=1499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:13.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:13.991 00:37:13.991 Run status group 0 (all jobs): 00:37:13.991 READ: bw=112MiB/s (117MB/s), 36.3MiB/s-38.6MiB/s (38.1MB/s-40.5MB/s), io=564MiB (591MB), run=5002-5035msec 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 bdev_null0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 [2024-07-13 01:02:24.636543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 bdev_null1 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 bdev_null2 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.991 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:13.992 { 00:37:13.992 "params": { 00:37:13.992 "name": "Nvme$subsystem", 00:37:13.992 "trtype": "$TEST_TRANSPORT", 00:37:13.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:13.992 "adrfam": "ipv4", 00:37:13.992 "trsvcid": "$NVMF_PORT", 00:37:13.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:13.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:13.992 "hdgst": ${hdgst:-false}, 00:37:13.992 "ddgst": ${ddgst:-false} 00:37:13.992 }, 00:37:13.992 "method": "bdev_nvme_attach_controller" 00:37:13.992 } 00:37:13.992 EOF 00:37:13.992 )") 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:13.992 { 00:37:13.992 "params": { 00:37:13.992 "name": "Nvme$subsystem", 00:37:13.992 "trtype": "$TEST_TRANSPORT", 00:37:13.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:13.992 "adrfam": "ipv4", 00:37:13.992 "trsvcid": "$NVMF_PORT", 00:37:13.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:13.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:13.992 "hdgst": ${hdgst:-false}, 00:37:13.992 "ddgst": ${ddgst:-false} 00:37:13.992 }, 00:37:13.992 "method": "bdev_nvme_attach_controller" 00:37:13.992 } 00:37:13.992 EOF 00:37:13.992 )") 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:13.992 01:02:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:13.992 { 00:37:13.992 "params": { 00:37:13.992 "name": "Nvme$subsystem", 00:37:13.992 "trtype": "$TEST_TRANSPORT", 00:37:13.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:13.992 "adrfam": "ipv4", 00:37:13.992 "trsvcid": "$NVMF_PORT", 00:37:13.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:13.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:13.992 "hdgst": ${hdgst:-false}, 00:37:13.992 "ddgst": ${ddgst:-false} 00:37:13.992 }, 00:37:13.992 "method": "bdev_nvme_attach_controller" 00:37:13.992 } 00:37:13.992 EOF 00:37:13.992 )") 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:13.992 "params": { 00:37:13.992 "name": "Nvme0", 00:37:13.992 "trtype": "tcp", 00:37:13.992 "traddr": "10.0.0.2", 00:37:13.992 "adrfam": "ipv4", 00:37:13.992 "trsvcid": "4420", 00:37:13.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:13.992 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:13.992 "hdgst": false, 00:37:13.992 "ddgst": false 00:37:13.992 }, 00:37:13.992 "method": "bdev_nvme_attach_controller" 00:37:13.992 },{ 00:37:13.992 "params": { 00:37:13.992 "name": "Nvme1", 00:37:13.992 "trtype": "tcp", 00:37:13.992 "traddr": "10.0.0.2", 00:37:13.992 "adrfam": "ipv4", 00:37:13.992 "trsvcid": "4420", 00:37:13.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:13.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:13.992 "hdgst": false, 00:37:13.992 "ddgst": false 00:37:13.992 }, 00:37:13.992 "method": "bdev_nvme_attach_controller" 00:37:13.992 },{ 00:37:13.992 "params": { 00:37:13.992 "name": "Nvme2", 00:37:13.992 "trtype": "tcp", 00:37:13.992 "traddr": "10.0.0.2", 00:37:13.992 "adrfam": "ipv4", 00:37:13.992 "trsvcid": "4420", 00:37:13.992 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:13.992 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:13.992 "hdgst": false, 00:37:13.992 "ddgst": false 00:37:13.992 }, 00:37:13.992 "method": "bdev_nvme_attach_controller" 00:37:13.992 }' 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:13.992 
01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:13.992 01:02:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:13.992 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:13.992 ... 00:37:13.992 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:13.992 ... 00:37:13.992 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:13.992 ... 00:37:13.992 fio-3.35 00:37:13.992 Starting 24 threads 00:37:13.992 EAL: No free 2048 kB hugepages reported on node 1 00:37:26.188 00:37:26.188 filename0: (groupid=0, jobs=1): err= 0: pid=1640037: Sat Jul 13 01:02:36 2024 00:37:26.188 read: IOPS=684, BW=2737KiB/s (2803kB/s)(26.8MiB/10013msec) 00:37:26.188 slat (nsec): min=6265, max=75013, avg=11293.13, stdev=8212.39 00:37:26.188 clat (usec): min=688, max=31147, avg=23290.03, stdev=5820.30 00:37:26.188 lat (usec): min=694, max=31153, avg=23301.33, stdev=5820.88 00:37:26.188 clat percentiles (usec): 00:37:26.188 | 1.00th=[ 1074], 5.00th=[13173], 10.00th=[16057], 20.00th=[24511], 00:37:26.188 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:37:26.188 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26346], 95.00th=[27395], 00:37:26.188 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28443], 99.95th=[31065], 00:37:26.188 | 99.99th=[31065] 00:37:26.188 bw ( KiB/s): min= 2304, max= 6000, per=4.59%, avg=2734.40, stdev=825.38, samples=20 00:37:26.188 iops : min= 576, max= 1500, avg=683.60, stdev=206.35, samples=20 00:37:26.188 lat (usec) : 750=0.12%, 1000=0.16% 00:37:26.188 lat (msec) : 2=4.01%, 4=0.25%, 10=0.23%, 20=13.47%, 50=81.76% 00:37:26.189 cpu : usr=98.99%, sys=0.63%, ctx=11, majf=0, minf=48 00:37:26.189 IO depths : 1=5.0%, 2=10.1%, 4=21.2%, 8=55.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.189 filename0: (groupid=0, jobs=1): err= 0: pid=1640038: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.2MiB/10003msec) 00:37:26.189 slat (nsec): min=6241, max=94698, avg=48429.73, stdev=17825.50 00:37:26.189 clat (usec): min=8904, max=50873, avg=25363.36, stdev=1749.17 00:37:26.189 lat (usec): min=8916, max=50897, avg=25411.79, stdev=1747.97 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.189 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.189 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.189 | 99.00th=[27919], 99.50th=[27919], 99.90th=[50594], 99.95th=[50594], 00:37:26.189 | 99.99th=[51119] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.63, stdev=85.46, samples=19 00:37:26.189 iops : 
min= 576, max= 640, avg=618.16, stdev=21.37, samples=19 00:37:26.189 lat (msec) : 10=0.26%, 20=0.26%, 50=99.23%, 100=0.26% 00:37:26.189 cpu : usr=99.12%, sys=0.44%, ctx=51, majf=0, minf=23 00:37:26.189 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.189 filename0: (groupid=0, jobs=1): err= 0: pid=1640039: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=621, BW=2485KiB/s (2545kB/s)(24.3MiB/10008msec) 00:37:26.189 slat (nsec): min=6127, max=84830, avg=36344.62, stdev=16845.16 00:37:26.189 clat (usec): min=7495, max=52812, avg=25442.76, stdev=1643.31 00:37:26.189 lat (usec): min=7510, max=52829, avg=25479.10, stdev=1643.76 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:37:26.189 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.189 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.189 | 99.00th=[28181], 99.50th=[28181], 99.90th=[38011], 99.95th=[38011], 00:37:26.189 | 99.99th=[52691] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2479.16, stdev=84.07, samples=19 00:37:26.189 iops : min= 576, max= 640, avg=619.79, stdev=21.02, samples=19 00:37:26.189 lat (msec) : 10=0.16%, 20=0.51%, 50=99.29%, 100=0.03% 00:37:26.189 cpu : usr=98.69%, sys=0.75%, ctx=55, majf=0, minf=22 00:37:26.189 IO depths : 1=2.7%, 2=8.9%, 4=24.9%, 8=53.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.189 filename0: (groupid=0, jobs=1): err= 0: pid=1640040: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=619, BW=2479KiB/s (2538kB/s)(24.2MiB/10017msec) 00:37:26.189 slat (nsec): min=7310, max=86787, avg=34443.25, stdev=19899.62 00:37:26.189 clat (usec): min=21708, max=47246, avg=25467.77, stdev=1338.62 00:37:26.189 lat (usec): min=21723, max=47286, avg=25502.21, stdev=1339.96 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:37:26.189 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.189 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.189 | 99.00th=[27919], 99.50th=[27919], 99.90th=[46924], 99.95th=[46924], 00:37:26.189 | 99.99th=[47449] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2476.80, stdev=94.28, samples=20 00:37:26.189 iops : min= 576, max= 640, avg=619.20, stdev=23.31, samples=20 00:37:26.189 lat (msec) : 50=100.00% 00:37:26.189 cpu : usr=98.80%, sys=0.68%, ctx=70, majf=0, minf=32 00:37:26.189 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:37:26.189 filename0: (groupid=0, jobs=1): err= 0: pid=1640041: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=619, BW=2480KiB/s (2539kB/s)(24.2MiB/10013msec) 00:37:26.189 slat (nsec): min=8804, max=94419, avg=43303.04, stdev=18234.91 00:37:26.189 clat (usec): min=17525, max=52840, avg=25452.50, stdev=1407.41 00:37:26.189 lat (usec): min=17560, max=52865, avg=25495.80, stdev=1404.99 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.189 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.189 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.189 | 99.00th=[27919], 99.50th=[27919], 99.90th=[46400], 99.95th=[46400], 00:37:26.189 | 99.99th=[52691] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=85.89, samples=19 00:37:26.189 iops : min= 576, max= 640, avg=618.11, stdev=21.47, samples=19 00:37:26.189 lat (msec) : 20=0.29%, 50=99.68%, 100=0.03% 00:37:26.189 cpu : usr=98.51%, sys=0.85%, ctx=68, majf=0, minf=20 00:37:26.189 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.189 filename0: (groupid=0, jobs=1): err= 0: pid=1640042: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=619, BW=2479KiB/s (2538kB/s)(24.2MiB/10017msec) 00:37:26.189 slat (nsec): min=6804, max=83376, avg=29713.13, stdev=15509.97 00:37:26.189 clat (usec): min=21738, max=47479, avg=25543.98, stdev=1342.11 00:37:26.189 lat (usec): min=21750, max=47507, avg=25573.69, stdev=1342.53 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:37:26.189 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:37:26.189 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.189 | 99.00th=[27919], 99.50th=[28181], 99.90th=[47449], 99.95th=[47449], 00:37:26.189 | 99.99th=[47449] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2476.80, stdev=84.72, samples=20 00:37:26.189 iops : min= 576, max= 640, avg=618.90, stdev=21.36, samples=20 00:37:26.189 lat (msec) : 50=100.00% 00:37:26.189 cpu : usr=98.83%, sys=0.68%, ctx=84, majf=0, minf=23 00:37:26.189 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.189 filename0: (groupid=0, jobs=1): err= 0: pid=1640043: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=619, BW=2477KiB/s (2536kB/s)(24.2MiB/10001msec) 00:37:26.189 slat (nsec): min=6889, max=92020, avg=22515.45, stdev=14711.71 00:37:26.189 clat (usec): min=17590, max=59008, avg=25681.07, stdev=1635.25 00:37:26.189 lat (usec): min=17598, max=59049, avg=25703.59, stdev=1634.17 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:26.189 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 
60.00th=[25560], 00:37:26.189 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27657], 00:37:26.189 | 99.00th=[28181], 99.50th=[28181], 99.90th=[52691], 99.95th=[52691], 00:37:26.189 | 99.99th=[58983] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=84.73, samples=19 00:37:26.189 iops : min= 576, max= 640, avg=618.11, stdev=21.18, samples=19 00:37:26.189 lat (msec) : 20=0.19%, 50=99.55%, 100=0.26% 00:37:26.189 cpu : usr=98.26%, sys=1.04%, ctx=58, majf=0, minf=34 00:37:26.189 IO depths : 1=2.8%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.189 filename0: (groupid=0, jobs=1): err= 0: pid=1640044: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=620, BW=2481KiB/s (2541kB/s)(24.2MiB/10008msec) 00:37:26.189 slat (nsec): min=6796, max=81602, avg=36341.94, stdev=20064.44 00:37:26.189 clat (usec): min=19198, max=40073, avg=25477.28, stdev=1118.66 00:37:26.189 lat (usec): min=19218, max=40099, avg=25513.62, stdev=1116.49 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[24773], 00:37:26.189 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.189 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.189 | 99.00th=[28181], 99.50th=[28181], 99.90th=[40109], 99.95th=[40109], 00:37:26.189 | 99.99th=[40109] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2479.16, stdev=76.45, samples=19 00:37:26.189 iops : min= 576, max= 640, avg=619.79, stdev=19.11, samples=19 00:37:26.189 lat (msec) : 20=0.26%, 50=99.74% 00:37:26.189 cpu : usr=98.91%, sys=0.59%, ctx=59, majf=0, minf=27 00:37:26.189 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.189 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.189 filename1: (groupid=0, jobs=1): err= 0: pid=1640045: Sat Jul 13 01:02:36 2024 00:37:26.189 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.2MiB/10004msec) 00:37:26.189 slat (nsec): min=8569, max=83969, avg=45775.82, stdev=14246.14 00:37:26.189 clat (usec): min=9108, max=57483, avg=25381.60, stdev=1795.49 00:37:26.189 lat (usec): min=9137, max=57508, avg=25427.38, stdev=1795.04 00:37:26.189 clat percentiles (usec): 00:37:26.189 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.189 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.189 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.189 | 99.00th=[27919], 99.50th=[27919], 99.90th=[51119], 99.95th=[51119], 00:37:26.189 | 99.99th=[57410] 00:37:26.189 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=85.89, samples=19 00:37:26.189 iops : min= 576, max= 640, avg=618.11, stdev=21.47, samples=19 00:37:26.189 lat (msec) : 10=0.26%, 20=0.29%, 50=99.19%, 100=0.26% 00:37:26.189 cpu : usr=99.14%, sys=0.45%, ctx=16, majf=0, minf=20 00:37:26.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename1: (groupid=0, jobs=1): err= 0: pid=1640046: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=620, BW=2483KiB/s (2542kB/s)(24.2MiB/10002msec) 00:37:26.190 slat (nsec): min=6469, max=89524, avg=44181.35, stdev=16317.18 00:37:26.190 clat (usec): min=17690, max=35315, avg=25417.23, stdev=977.41 00:37:26.190 lat (usec): min=17742, max=35332, avg=25461.41, stdev=975.87 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.190 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.190 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.190 | 99.00th=[27919], 99.50th=[28181], 99.90th=[35390], 99.95th=[35390], 00:37:26.190 | 99.99th=[35390] 00:37:26.190 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2479.16, stdev=87.55, samples=19 00:37:26.190 iops : min= 576, max= 640, avg=619.79, stdev=21.89, samples=19 00:37:26.190 lat (msec) : 20=0.26%, 50=99.74% 00:37:26.190 cpu : usr=98.82%, sys=0.75%, ctx=51, majf=0, minf=21 00:37:26.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename1: (groupid=0, jobs=1): err= 0: pid=1640047: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=620, BW=2482KiB/s (2541kB/s)(24.2MiB/10002msec) 00:37:26.190 slat (nsec): min=4700, max=88637, avg=47822.79, stdev=13886.54 00:37:26.190 clat (usec): min=8943, max=62957, avg=25361.37, stdev=1783.13 00:37:26.190 lat (usec): min=9008, max=62996, avg=25409.19, stdev=1783.33 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.190 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.190 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26346], 95.00th=[27395], 00:37:26.190 | 99.00th=[27919], 99.50th=[28181], 99.90th=[49546], 99.95th=[49546], 00:37:26.190 | 99.99th=[63177] 00:37:26.190 bw ( KiB/s): min= 2288, max= 2560, per=4.15%, avg=2471.32, stdev=97.64, samples=19 00:37:26.190 iops : min= 572, max= 640, avg=617.79, stdev=24.43, samples=19 00:37:26.190 lat (msec) : 10=0.26%, 20=0.29%, 50=99.42%, 100=0.03% 00:37:26.190 cpu : usr=98.98%, sys=0.64%, ctx=22, majf=0, minf=24 00:37:26.190 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename1: (groupid=0, jobs=1): err= 0: pid=1640048: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=620, BW=2481KiB/s (2541kB/s)(24.2MiB/10008msec) 00:37:26.190 slat (nsec): min=6338, max=87814, avg=42362.57, stdev=18093.78 
00:37:26.190 clat (usec): min=18401, max=40920, avg=25387.27, stdev=1140.55 00:37:26.190 lat (usec): min=18467, max=40939, avg=25429.63, stdev=1141.22 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.190 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.190 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26346], 95.00th=[27395], 00:37:26.190 | 99.00th=[27919], 99.50th=[28181], 99.90th=[40633], 99.95th=[40633], 00:37:26.190 | 99.99th=[41157] 00:37:26.190 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2479.16, stdev=87.55, samples=19 00:37:26.190 iops : min= 576, max= 640, avg=619.79, stdev=21.89, samples=19 00:37:26.190 lat (msec) : 20=0.26%, 50=99.74% 00:37:26.190 cpu : usr=97.85%, sys=1.07%, ctx=170, majf=0, minf=21 00:37:26.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename1: (groupid=0, jobs=1): err= 0: pid=1640049: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=623, BW=2495KiB/s (2555kB/s)(24.5MiB/10044msec) 00:37:26.190 slat (nsec): min=6325, max=94476, avg=35386.11, stdev=23639.23 00:37:26.190 clat (usec): min=10393, max=51378, avg=25339.48, stdev=2964.24 00:37:26.190 lat (usec): min=10429, max=51397, avg=25374.87, stdev=2964.27 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[16712], 5.00th=[21365], 10.00th=[22676], 20.00th=[24773], 00:37:26.190 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.190 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27395], 95.00th=[28967], 00:37:26.190 | 99.00th=[35390], 99.50th=[39060], 99.90th=[51119], 99.95th=[51119], 00:37:26.190 | 99.99th=[51119] 00:37:26.190 bw ( KiB/s): min= 2304, max= 2720, per=4.20%, avg=2501.60, stdev=99.75, samples=20 00:37:26.190 iops : min= 576, max= 680, avg=625.40, stdev=24.94, samples=20 00:37:26.190 lat (msec) : 20=3.08%, 50=96.57%, 100=0.35% 00:37:26.190 cpu : usr=97.91%, sys=1.18%, ctx=162, majf=0, minf=24 00:37:26.190 IO depths : 1=3.1%, 2=6.8%, 4=15.8%, 8=63.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=91.9%, 8=3.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename1: (groupid=0, jobs=1): err= 0: pid=1640050: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=620, BW=2482KiB/s (2541kB/s)(24.2MiB/10006msec) 00:37:26.190 slat (nsec): min=6393, max=86633, avg=33148.56, stdev=20411.71 00:37:26.190 clat (usec): min=10423, max=49335, avg=25436.51, stdev=1616.93 00:37:26.190 lat (usec): min=10430, max=49353, avg=25469.65, stdev=1618.14 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:37:26.190 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.190 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.190 | 99.00th=[27919], 99.50th=[27919], 99.90th=[49021], 99.95th=[49546], 00:37:26.190 | 99.99th=[49546] 00:37:26.190 bw ( KiB/s): min= 2304, 
max= 2560, per=4.15%, avg=2472.63, stdev=95.52, samples=19 00:37:26.190 iops : min= 576, max= 640, avg=618.16, stdev=23.88, samples=19 00:37:26.190 lat (msec) : 20=0.26%, 50=99.74% 00:37:26.190 cpu : usr=99.16%, sys=0.45%, ctx=12, majf=0, minf=25 00:37:26.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename1: (groupid=0, jobs=1): err= 0: pid=1640051: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=619, BW=2479KiB/s (2538kB/s)(24.2MiB/10017msec) 00:37:26.190 slat (nsec): min=7025, max=86637, avg=34184.71, stdev=20126.71 00:37:26.190 clat (usec): min=21697, max=47359, avg=25459.33, stdev=1342.85 00:37:26.190 lat (usec): min=21713, max=47383, avg=25493.51, stdev=1344.31 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:37:26.190 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.190 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.190 | 99.00th=[27919], 99.50th=[27919], 99.90th=[47449], 99.95th=[47449], 00:37:26.190 | 99.99th=[47449] 00:37:26.190 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2476.80, stdev=85.87, samples=20 00:37:26.190 iops : min= 576, max= 640, avg=619.20, stdev=21.47, samples=20 00:37:26.190 lat (msec) : 50=100.00% 00:37:26.190 cpu : usr=98.24%, sys=0.99%, ctx=112, majf=0, minf=25 00:37:26.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename1: (groupid=0, jobs=1): err= 0: pid=1640052: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=619, BW=2477KiB/s (2536kB/s)(24.2MiB/10001msec) 00:37:26.190 slat (nsec): min=6889, max=89402, avg=16214.17, stdev=9997.05 00:37:26.190 clat (usec): min=23386, max=52669, avg=25706.77, stdev=1549.04 00:37:26.190 lat (usec): min=23401, max=52704, avg=25722.98, stdev=1548.87 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[24511], 5.00th=[25035], 10.00th=[25035], 20.00th=[25297], 00:37:26.190 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:37:26.190 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[27657], 00:37:26.190 | 99.00th=[27919], 99.50th=[28181], 99.90th=[52691], 99.95th=[52691], 00:37:26.190 | 99.99th=[52691] 00:37:26.190 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=85.89, samples=19 00:37:26.190 iops : min= 576, max= 640, avg=618.11, stdev=21.47, samples=19 00:37:26.190 lat (msec) : 50=99.74%, 100=0.26% 00:37:26.190 cpu : usr=98.50%, sys=0.89%, ctx=179, majf=0, minf=29 00:37:26.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.190 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.190 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:37:26.190 filename2: (groupid=0, jobs=1): err= 0: pid=1640053: Sat Jul 13 01:02:36 2024 00:37:26.190 read: IOPS=620, BW=2481KiB/s (2541kB/s)(24.2MiB/10008msec) 00:37:26.190 slat (nsec): min=6624, max=84960, avg=36502.47, stdev=16242.78 00:37:26.190 clat (usec): min=12326, max=47070, avg=25460.85, stdev=1144.04 00:37:26.190 lat (usec): min=12336, max=47098, avg=25497.35, stdev=1144.64 00:37:26.190 clat percentiles (usec): 00:37:26.190 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:37:26.190 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.190 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.190 | 99.00th=[27919], 99.50th=[28181], 99.90th=[40109], 99.95th=[40109], 00:37:26.190 | 99.99th=[46924] 00:37:26.190 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2479.16, stdev=76.45, samples=19 00:37:26.190 iops : min= 576, max= 640, avg=619.79, stdev=19.11, samples=19 00:37:26.190 lat (msec) : 20=0.26%, 50=99.74% 00:37:26.190 cpu : usr=98.61%, sys=0.76%, ctx=72, majf=0, minf=27 00:37:26.190 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 filename2: (groupid=0, jobs=1): err= 0: pid=1640054: Sat Jul 13 01:02:36 2024 00:37:26.191 read: IOPS=619, BW=2477KiB/s (2536kB/s)(24.2MiB/10001msec) 00:37:26.191 slat (nsec): min=6950, max=93447, avg=26970.26, stdev=18040.28 00:37:26.191 clat (usec): min=23462, max=52809, avg=25641.45, stdev=1563.16 00:37:26.191 lat (usec): min=23477, max=52842, avg=25668.42, stdev=1561.48 00:37:26.191 clat percentiles (usec): 00:37:26.191 | 1.00th=[24511], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:37:26.191 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:37:26.191 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27657], 00:37:26.191 | 99.00th=[27919], 99.50th=[28181], 99.90th=[52691], 99.95th=[52691], 00:37:26.191 | 99.99th=[52691] 00:37:26.191 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=85.89, samples=19 00:37:26.191 iops : min= 576, max= 640, avg=618.11, stdev=21.47, samples=19 00:37:26.191 lat (msec) : 50=99.74%, 100=0.26% 00:37:26.191 cpu : usr=98.57%, sys=0.76%, ctx=177, majf=0, minf=26 00:37:26.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 filename2: (groupid=0, jobs=1): err= 0: pid=1640055: Sat Jul 13 01:02:36 2024 00:37:26.191 read: IOPS=620, BW=2483KiB/s (2542kB/s)(24.2MiB/10002msec) 00:37:26.191 slat (nsec): min=6213, max=94487, avg=45204.28, stdev=19626.83 00:37:26.191 clat (usec): min=17582, max=41871, avg=25420.26, stdev=1030.42 00:37:26.191 lat (usec): min=17597, max=41889, avg=25465.47, stdev=1027.24 00:37:26.191 clat percentiles (usec): 00:37:26.191 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.191 | 30.00th=[25035], 
40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.191 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.191 | 99.00th=[27919], 99.50th=[28181], 99.90th=[35390], 99.95th=[35390], 00:37:26.191 | 99.99th=[41681] 00:37:26.191 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2479.16, stdev=87.55, samples=19 00:37:26.191 iops : min= 576, max= 640, avg=619.79, stdev=21.89, samples=19 00:37:26.191 lat (msec) : 20=0.29%, 50=99.71% 00:37:26.191 cpu : usr=98.29%, sys=0.98%, ctx=61, majf=0, minf=22 00:37:26.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 filename2: (groupid=0, jobs=1): err= 0: pid=1640056: Sat Jul 13 01:02:36 2024 00:37:26.191 read: IOPS=619, BW=2478KiB/s (2538kB/s)(24.2MiB/10019msec) 00:37:26.191 slat (nsec): min=6480, max=72229, avg=26213.85, stdev=13671.10 00:37:26.191 clat (usec): min=19425, max=47237, avg=25611.45, stdev=1378.40 00:37:26.191 lat (usec): min=19438, max=47269, avg=25637.67, stdev=1378.21 00:37:26.191 clat percentiles (usec): 00:37:26.191 | 1.00th=[23725], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:37:26.191 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:37:26.191 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.191 | 99.00th=[27919], 99.50th=[28181], 99.90th=[46924], 99.95th=[46924], 00:37:26.191 | 99.99th=[47449] 00:37:26.191 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2476.80, stdev=95.38, samples=20 00:37:26.191 iops : min= 576, max= 640, avg=619.20, stdev=23.85, samples=20 00:37:26.191 lat (msec) : 20=0.10%, 50=99.90% 00:37:26.191 cpu : usr=98.74%, sys=0.88%, ctx=17, majf=0, minf=34 00:37:26.191 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 filename2: (groupid=0, jobs=1): err= 0: pid=1640057: Sat Jul 13 01:02:36 2024 00:37:26.191 read: IOPS=620, BW=2481KiB/s (2541kB/s)(24.2MiB/10008msec) 00:37:26.191 slat (nsec): min=6663, max=76629, avg=34720.62, stdev=13838.29 00:37:26.191 clat (usec): min=12305, max=47087, avg=25491.10, stdev=1178.29 00:37:26.191 lat (usec): min=12315, max=47116, avg=25525.82, stdev=1178.23 00:37:26.191 clat percentiles (usec): 00:37:26.191 | 1.00th=[24249], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:37:26.191 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:37:26.191 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.191 | 99.00th=[27919], 99.50th=[28181], 99.90th=[40109], 99.95th=[40109], 00:37:26.191 | 99.99th=[46924] 00:37:26.191 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2479.16, stdev=76.45, samples=19 00:37:26.191 iops : min= 576, max= 640, avg=619.79, stdev=19.11, samples=19 00:37:26.191 lat (msec) : 20=0.29%, 50=99.71% 00:37:26.191 cpu : usr=99.03%, sys=0.58%, ctx=23, majf=0, minf=23 00:37:26.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 filename2: (groupid=0, jobs=1): err= 0: pid=1640058: Sat Jul 13 01:02:36 2024 00:37:26.191 read: IOPS=619, BW=2479KiB/s (2539kB/s)(24.2MiB/10016msec) 00:37:26.191 slat (nsec): min=6288, max=70928, avg=15687.64, stdev=10973.85 00:37:26.191 clat (usec): min=22051, max=47409, avg=25687.51, stdev=1336.59 00:37:26.191 lat (usec): min=22064, max=47442, avg=25703.19, stdev=1336.31 00:37:26.191 clat percentiles (usec): 00:37:26.191 | 1.00th=[23987], 5.00th=[24773], 10.00th=[25035], 20.00th=[25297], 00:37:26.191 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:37:26.191 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:37:26.191 | 99.00th=[28181], 99.50th=[28181], 99.90th=[47449], 99.95th=[47449], 00:37:26.191 | 99.99th=[47449] 00:37:26.191 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2476.80, stdev=85.87, samples=20 00:37:26.191 iops : min= 576, max= 640, avg=619.20, stdev=21.47, samples=20 00:37:26.191 lat (msec) : 50=100.00% 00:37:26.191 cpu : usr=98.35%, sys=0.95%, ctx=189, majf=0, minf=36 00:37:26.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 filename2: (groupid=0, jobs=1): err= 0: pid=1640059: Sat Jul 13 01:02:36 2024 00:37:26.191 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.2MiB/10005msec) 00:37:26.191 slat (nsec): min=6523, max=89988, avg=46683.45, stdev=15918.68 00:37:26.191 clat (usec): min=8908, max=51505, avg=25359.58, stdev=1770.46 00:37:26.191 lat (usec): min=8915, max=51521, avg=25406.26, stdev=1769.77 00:37:26.191 clat percentiles (usec): 00:37:26.191 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.191 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:37:26.191 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.191 | 99.00th=[27919], 99.50th=[27919], 99.90th=[51643], 99.95th=[51643], 00:37:26.191 | 99.99th=[51643] 00:37:26.191 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.42, stdev=85.89, samples=19 00:37:26.191 iops : min= 576, max= 640, avg=618.11, stdev=21.47, samples=19 00:37:26.191 lat (msec) : 10=0.26%, 20=0.26%, 50=99.23%, 100=0.26% 00:37:26.191 cpu : usr=98.89%, sys=0.67%, ctx=27, majf=0, minf=19 00:37:26.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 filename2: (groupid=0, jobs=1): err= 0: pid=1640060: Sat Jul 13 01:02:36 2024 00:37:26.191 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.2MiB/10003msec) 00:37:26.191 slat (nsec): min=9078, max=98078, avg=47110.73, stdev=16182.99 00:37:26.191 
clat (usec): min=9097, max=56835, avg=25381.90, stdev=1774.59 00:37:26.191 lat (usec): min=9135, max=56850, avg=25429.01, stdev=1773.56 00:37:26.191 clat percentiles (usec): 00:37:26.191 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:37:26.191 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:37:26.191 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:37:26.191 | 99.00th=[27919], 99.50th=[27919], 99.90th=[50594], 99.95th=[50594], 00:37:26.191 | 99.99th=[56886] 00:37:26.191 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2472.63, stdev=85.46, samples=19 00:37:26.191 iops : min= 576, max= 640, avg=618.16, stdev=21.37, samples=19 00:37:26.191 lat (msec) : 10=0.26%, 20=0.29%, 50=99.19%, 100=0.26% 00:37:26.191 cpu : usr=98.92%, sys=0.66%, ctx=18, majf=0, minf=29 00:37:26.191 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:26.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.191 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:26.191 00:37:26.191 Run status group 0 (all jobs): 00:37:26.191 READ: bw=58.2MiB/s (61.0MB/s), 2477KiB/s-2737KiB/s (2536kB/s-2803kB/s), io=585MiB (613MB), run=10001-10044msec 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.191 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 bdev_null0 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:26.192 01:02:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 [2024-07-13 01:02:36.349306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 bdev_null1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
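The create_subsystems trace above reduces to four RPCs per subsystem: a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1; an NVMe-oF subsystem; a namespace mapping; and a TCP listener on 10.0.0.2:4420. A condensed standalone equivalent, assuming a running nvmf_tgt and the in-tree rpc.py (the loop and variable names here are illustrative; the commands and arguments are taken from this log):

```bash
# Condensed form of the create_subsystem calls traced above; assumes an SPDK
# nvmf_tgt is already running and rpc.py can reach it.
rpc=./scripts/rpc.py
for sub in 0 1; do
    $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420
done
```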
00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:26.192 { 00:37:26.192 "params": { 00:37:26.192 "name": "Nvme$subsystem", 00:37:26.192 "trtype": "$TEST_TRANSPORT", 00:37:26.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.192 "adrfam": "ipv4", 00:37:26.192 "trsvcid": "$NVMF_PORT", 00:37:26.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.192 "hdgst": ${hdgst:-false}, 00:37:26.192 "ddgst": ${ddgst:-false} 00:37:26.192 }, 00:37:26.192 "method": "bdev_nvme_attach_controller" 00:37:26.192 } 00:37:26.192 EOF 00:37:26.192 )") 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:26.192 { 00:37:26.192 "params": { 00:37:26.192 "name": "Nvme$subsystem", 00:37:26.192 "trtype": "$TEST_TRANSPORT", 00:37:26.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.192 "adrfam": "ipv4", 00:37:26.192 "trsvcid": "$NVMF_PORT", 00:37:26.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.192 "hdgst": ${hdgst:-false}, 00:37:26.192 
"ddgst": ${ddgst:-false} 00:37:26.192 }, 00:37:26.192 "method": "bdev_nvme_attach_controller" 00:37:26.192 } 00:37:26.192 EOF 00:37:26.192 )") 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:26.192 01:02:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:26.192 "params": { 00:37:26.192 "name": "Nvme0", 00:37:26.192 "trtype": "tcp", 00:37:26.192 "traddr": "10.0.0.2", 00:37:26.192 "adrfam": "ipv4", 00:37:26.192 "trsvcid": "4420", 00:37:26.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.192 "hdgst": false, 00:37:26.192 "ddgst": false 00:37:26.192 }, 00:37:26.192 "method": "bdev_nvme_attach_controller" 00:37:26.192 },{ 00:37:26.192 "params": { 00:37:26.192 "name": "Nvme1", 00:37:26.192 "trtype": "tcp", 00:37:26.192 "traddr": "10.0.0.2", 00:37:26.192 "adrfam": "ipv4", 00:37:26.192 "trsvcid": "4420", 00:37:26.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:26.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:26.192 "hdgst": false, 00:37:26.193 "ddgst": false 00:37:26.193 }, 00:37:26.193 "method": "bdev_nvme_attach_controller" 00:37:26.193 }' 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:26.193 01:02:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:26.193 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:26.193 ... 00:37:26.193 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:26.193 ... 
00:37:26.193 fio-3.35 00:37:26.193 Starting 4 threads 00:37:26.193 EAL: No free 2048 kB hugepages reported on node 1 00:37:31.480 00:37:31.480 filename0: (groupid=0, jobs=1): err= 0: pid=1642002: Sat Jul 13 01:02:42 2024 00:37:31.480 read: IOPS=2845, BW=22.2MiB/s (23.3MB/s)(111MiB/5003msec) 00:37:31.480 slat (nsec): min=6184, max=38917, avg=9462.37, stdev=3365.24 00:37:31.480 clat (usec): min=578, max=5462, avg=2781.44, stdev=513.91 00:37:31.480 lat (usec): min=590, max=5468, avg=2790.91, stdev=513.99 00:37:31.480 clat percentiles (usec): 00:37:31.480 | 1.00th=[ 1762], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2376], 00:37:31.480 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2868], 00:37:31.480 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 3687], 00:37:31.480 | 99.00th=[ 4555], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5276], 00:37:31.480 | 99.99th=[ 5473] 00:37:31.480 bw ( KiB/s): min=20064, max=25744, per=27.54%, avg=22835.56, stdev=1817.21, samples=9 00:37:31.480 iops : min= 2508, max= 3218, avg=2854.44, stdev=227.15, samples=9 00:37:31.480 lat (usec) : 750=0.02%, 1000=0.11% 00:37:31.480 lat (msec) : 2=2.16%, 4=94.70%, 10=3.01% 00:37:31.480 cpu : usr=95.70%, sys=3.92%, ctx=15, majf=0, minf=9 00:37:31.480 IO depths : 1=0.4%, 2=9.9%, 4=62.0%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.480 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.480 issued rwts: total=14235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:31.480 filename0: (groupid=0, jobs=1): err= 0: pid=1642003: Sat Jul 13 01:02:42 2024 00:37:31.480 read: IOPS=2473, BW=19.3MiB/s (20.3MB/s)(97.4MiB/5041msec) 00:37:31.480 slat (nsec): min=6175, max=57222, avg=9616.99, stdev=3640.72 00:37:31.480 clat (usec): min=573, max=41318, avg=3187.45, stdev=831.28 00:37:31.480 lat (usec): min=585, max=41330, avg=3197.07, stdev=830.76 00:37:31.480 clat percentiles (usec): 00:37:31.480 | 1.00th=[ 2114], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2802], 00:37:31.480 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:37:31.480 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3982], 95.00th=[ 4621], 00:37:31.480 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5604], 00:37:31.480 | 99.99th=[41157] 00:37:31.480 bw ( KiB/s): min=18912, max=21040, per=24.18%, avg=20047.22, stdev=705.60, samples=9 00:37:31.480 iops : min= 2364, max= 2630, avg=2505.89, stdev=88.18, samples=9 00:37:31.480 lat (usec) : 750=0.06%, 1000=0.02% 00:37:31.480 lat (msec) : 2=0.63%, 4=89.35%, 10=9.92%, 50=0.02% 00:37:31.480 cpu : usr=96.55%, sys=3.12%, ctx=12, majf=0, minf=9 00:37:31.480 IO depths : 1=0.1%, 2=6.4%, 4=66.7%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.480 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.480 issued rwts: total=12470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:31.480 filename1: (groupid=0, jobs=1): err= 0: pid=1642004: Sat Jul 13 01:02:42 2024 00:37:31.480 read: IOPS=2458, BW=19.2MiB/s (20.1MB/s)(96.1MiB/5002msec) 00:37:31.480 slat (nsec): min=6196, max=37970, avg=9263.61, stdev=3449.96 00:37:31.480 clat (usec): min=662, max=6200, avg=3225.77, stdev=551.23 00:37:31.480 lat (usec): min=675, max=6207, avg=3235.04, 
stdev=550.62 00:37:31.480 clat percentiles (usec): 00:37:31.480 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2868], 00:37:31.480 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3163], 00:37:31.480 | 70.00th=[ 3294], 80.00th=[ 3490], 90.00th=[ 3982], 95.00th=[ 4621], 00:37:31.480 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5473], 99.95th=[ 5604], 00:37:31.480 | 99.99th=[ 6194] 00:37:31.480 bw ( KiB/s): min=18960, max=20608, per=23.80%, avg=19735.11, stdev=598.40, samples=9 00:37:31.480 iops : min= 2370, max= 2576, avg=2466.89, stdev=74.80, samples=9 00:37:31.480 lat (usec) : 750=0.02%, 1000=0.03% 00:37:31.480 lat (msec) : 2=0.39%, 4=89.80%, 10=9.76% 00:37:31.480 cpu : usr=96.72%, sys=2.94%, ctx=11, majf=0, minf=9 00:37:31.480 IO depths : 1=0.1%, 2=2.7%, 4=70.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.480 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.480 issued rwts: total=12298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:31.480 filename1: (groupid=0, jobs=1): err= 0: pid=1642005: Sat Jul 13 01:02:42 2024 00:37:31.480 read: IOPS=2648, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:37:31.480 slat (nsec): min=6188, max=41778, avg=9528.64, stdev=3409.08 00:37:31.480 clat (usec): min=662, max=43861, avg=2992.62, stdev=1142.24 00:37:31.480 lat (usec): min=674, max=43885, avg=3002.15, stdev=1142.24 00:37:31.480 clat percentiles (usec): 00:37:31.480 | 1.00th=[ 1827], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2573], 00:37:31.480 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 3032], 00:37:31.480 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3589], 95.00th=[ 4080], 00:37:31.480 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5538], 99.95th=[43779], 00:37:31.480 | 99.99th=[43779] 00:37:31.480 bw ( KiB/s): min=17987, max=22816, per=25.22%, avg=20915.89, stdev=1508.47, samples=9 00:37:31.480 iops : min= 2248, max= 2852, avg=2614.44, stdev=188.65, samples=9 00:37:31.481 lat (usec) : 750=0.01%, 1000=0.04% 00:37:31.481 lat (msec) : 2=2.08%, 4=92.39%, 10=5.42%, 50=0.06% 00:37:31.481 cpu : usr=96.20%, sys=3.46%, ctx=10, majf=0, minf=9 00:37:31.481 IO depths : 1=0.3%, 2=4.9%, 4=66.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:31.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.481 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:31.481 issued rwts: total=13247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:31.481 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:31.481 00:37:31.481 Run status group 0 (all jobs): 00:37:31.481 READ: bw=81.0MiB/s (84.9MB/s), 19.2MiB/s-22.2MiB/s (20.1MB/s-23.3MB/s), io=408MiB (428MB), run=5002-5041msec 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.481 00:37:31.481 real 0m24.219s 00:37:31.481 user 4m51.547s 00:37:31.481 sys 0m4.414s 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 ************************************ 00:37:31.481 END TEST fio_dif_rand_params 00:37:31.481 ************************************ 00:37:31.481 01:02:42 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:31.481 01:02:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:31.481 01:02:42 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:31.481 01:02:42 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 ************************************ 00:37:31.481 START TEST fio_dif_digest 00:37:31.481 ************************************ 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest 
-- target/dif.sh@127 -- # runtime=10 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 bdev_null0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:31.481 [2024-07-13 01:02:42.765960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:31.481 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
gen_fio_conf 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:31.482 { 00:37:31.482 "params": { 00:37:31.482 "name": "Nvme$subsystem", 00:37:31.482 "trtype": "$TEST_TRANSPORT", 00:37:31.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:31.482 "adrfam": "ipv4", 00:37:31.482 "trsvcid": "$NVMF_PORT", 00:37:31.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:31.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:31.482 "hdgst": ${hdgst:-false}, 00:37:31.482 "ddgst": ${ddgst:-false} 00:37:31.482 }, 00:37:31.482 "method": "bdev_nvme_attach_controller" 00:37:31.482 } 00:37:31.482 EOF 00:37:31.482 )") 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
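The config+=("$(cat <<-EOF ... EOF)") fragments being traced here build one bdev_nvme_attach_controller stanza per subsystem in a bash array, which jq . then pretty-prints as a single document. An illustrative reduction of gen_nvmf_target_json (the final printf wrapper is an assumption; the per-controller fragment mirrors the heredoc in the trace):

```bash
# Illustrative reduction of gen_nvmf_target_json as traced above. The
# "subsystems" wrapper in the printf is an assumption; the fragment body
# follows the traced heredoc.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,    # join the fragments with commas
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}
```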
00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:31.482 "params": { 00:37:31.482 "name": "Nvme0", 00:37:31.482 "trtype": "tcp", 00:37:31.482 "traddr": "10.0.0.2", 00:37:31.482 "adrfam": "ipv4", 00:37:31.482 "trsvcid": "4420", 00:37:31.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:31.482 "hdgst": true, 00:37:31.482 "ddgst": true 00:37:31.482 }, 00:37:31.482 "method": "bdev_nvme_attach_controller" 00:37:31.482 }' 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:31.482 01:02:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:31.757 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:31.757 ... 
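For fio_dif_digest the generated params carry "hdgst": true and "ddgst": true, so the NVMe/TCP initiator negotiates header and data digests (CRC32C per PDU) with the listener, and the null bdev behind the namespace was created with --dif-type 3 this time. The traced job variables (bs=128k, iodepth=3, numjobs=3, runtime=10) translate to roughly the following, reusing the hypothetical /tmp/bdev.json from the earlier sketch with the two digest flags flipped to true:

```bash
# Sketch of the fio_dif_digest workload; job knobs come from the traced
# bs/numjobs/iodepth/runtime assignments. /tmp/bdev.json is the hypothetical
# config from the earlier sketch, edited to "hdgst": true, "ddgst": true.
LD_PRELOAD="$plugin" fio --name=filename0 --thread=1 --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/bdev.json --filename=Nvme0n1 --rw=randread \
    --bs=128k --iodepth=3 --numjobs=3 --runtime=10 --time_based=1
```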
00:37:31.757 fio-3.35 00:37:31.757 Starting 3 threads 00:37:31.757 EAL: No free 2048 kB hugepages reported on node 1 00:37:43.965 00:37:43.965 filename0: (groupid=0, jobs=1): err= 0: pid=1643062: Sat Jul 13 01:02:53 2024 00:37:43.965 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(348MiB/10046msec) 00:37:43.965 slat (nsec): min=6546, max=55132, avg=17357.33, stdev=6820.89 00:37:43.965 clat (usec): min=7293, max=49301, avg=10797.62, stdev=1270.57 00:37:43.965 lat (usec): min=7311, max=49315, avg=10814.97, stdev=1270.64 00:37:43.965 clat percentiles (usec): 00:37:43.965 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:37:43.965 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:37:43.965 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:37:43.965 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13566], 99.95th=[46400], 00:37:43.965 | 99.99th=[49546] 00:37:43.965 bw ( KiB/s): min=34560, max=38144, per=33.28%, avg=35584.00, stdev=818.02, samples=20 00:37:43.965 iops : min= 270, max= 298, avg=278.00, stdev= 6.39, samples=20 00:37:43.965 lat (msec) : 10=15.42%, 20=84.51%, 50=0.07% 00:37:43.965 cpu : usr=95.84%, sys=3.83%, ctx=23, majf=0, minf=57 00:37:43.965 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:43.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.965 issued rwts: total=2782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:43.965 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:43.965 filename0: (groupid=0, jobs=1): err= 0: pid=1643063: Sat Jul 13 01:02:53 2024 00:37:43.965 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(336MiB/10043msec) 00:37:43.965 slat (nsec): min=6588, max=90142, avg=16712.74, stdev=7749.42 00:37:43.965 clat (usec): min=8284, max=53403, avg=11165.92, stdev=1875.58 00:37:43.965 lat (usec): min=8298, max=53425, avg=11182.63, stdev=1875.66 00:37:43.965 clat percentiles (usec): 00:37:43.965 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:37:43.965 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:37:43.965 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:37:43.965 | 99.00th=[13042], 99.50th=[13435], 99.90th=[53216], 99.95th=[53216], 00:37:43.965 | 99.99th=[53216] 00:37:43.965 bw ( KiB/s): min=30976, max=36608, per=32.18%, avg=34406.40, stdev=1061.71, samples=20 00:37:43.965 iops : min= 242, max= 286, avg=268.80, stdev= 8.29, samples=20 00:37:43.965 lat (msec) : 10=7.36%, 20=92.45%, 50=0.07%, 100=0.11% 00:37:43.965 cpu : usr=96.67%, sys=2.99%, ctx=21, majf=0, minf=227 00:37:43.965 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:43.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.965 issued rwts: total=2690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:43.965 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:43.965 filename0: (groupid=0, jobs=1): err= 0: pid=1643064: Sat Jul 13 01:02:53 2024 00:37:43.965 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10049msec) 00:37:43.965 slat (nsec): min=6531, max=49292, avg=16650.53, stdev=7786.61 00:37:43.965 clat (usec): min=6261, max=53949, avg=10283.09, stdev=1307.68 00:37:43.965 lat (usec): min=6279, max=53963, avg=10299.74, stdev=1307.52 00:37:43.965 clat percentiles (usec): 00:37:43.965 | 
1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:37:43.965 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:37:43.965 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:37:43.965 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12649], 99.95th=[49021], 00:37:43.965 | 99.99th=[53740] 00:37:43.965 bw ( KiB/s): min=36352, max=39168, per=34.96%, avg=37376.00, stdev=632.55, samples=20 00:37:43.965 iops : min= 284, max= 306, avg=292.00, stdev= 4.94, samples=20 00:37:43.965 lat (msec) : 10=34.19%, 20=65.74%, 50=0.03%, 100=0.03% 00:37:43.965 cpu : usr=92.23%, sys=5.21%, ctx=975, majf=0, minf=142 00:37:43.965 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:43.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:43.965 issued rwts: total=2922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:43.965 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:43.965 00:37:43.965 Run status group 0 (all jobs): 00:37:43.965 READ: bw=104MiB/s (109MB/s), 33.5MiB/s-36.3MiB/s (35.1MB/s-38.1MB/s), io=1049MiB (1100MB), run=10043-10049msec 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:43.965 00:37:43.965 real 0m11.156s 00:37:43.965 user 0m35.473s 00:37:43.965 sys 0m1.505s 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:43.965 01:02:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:43.965 ************************************ 00:37:43.965 END TEST fio_dif_digest 00:37:43.965 ************************************ 00:37:43.965 01:02:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:43.965 01:02:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:43.965 01:02:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:43.965 01:02:53 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:37:43.966 rmmod nvme_tcp 00:37:43.966 rmmod nvme_fabrics 00:37:43.966 rmmod nvme_keyring 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1634679 ']' 00:37:43.966 01:02:53 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1634679 00:37:43.966 01:02:53 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1634679 ']' 00:37:43.966 01:02:53 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1634679 00:37:43.966 01:02:53 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:37:43.966 01:02:53 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:43.966 01:02:53 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1634679 00:37:43.966 01:02:54 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:43.966 01:02:54 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:43.966 01:02:54 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1634679' 00:37:43.966 killing process with pid 1634679 00:37:43.966 01:02:54 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1634679 00:37:43.966 01:02:54 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1634679 00:37:43.966 01:02:54 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:43.966 01:02:54 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:45.344 Waiting for block devices as requested 00:37:45.344 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:45.603 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:45.603 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:45.603 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:45.862 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:45.862 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:45.862 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:46.120 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:46.120 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:46.120 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:46.120 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:46.380 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:46.380 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:46.380 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:46.639 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:46.639 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:46.639 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:46.899 01:02:58 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:46.899 01:02:58 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:46.899 01:02:58 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:46.899 01:02:58 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:46.899 01:02:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.899 01:02:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:46.899 01:02:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.804 01:03:00 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:48.804 00:37:48.804 real 1m14.003s 00:37:48.804 user 7m9.587s 00:37:48.804 sys 0m18.636s 00:37:48.804 01:03:00 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:37:48.804 01:03:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:48.804 ************************************ 00:37:48.804 END TEST nvmf_dif 00:37:48.804 ************************************ 00:37:48.804 01:03:00 -- common/autotest_common.sh@1142 -- # return 0 00:37:48.804 01:03:00 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:48.804 01:03:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:48.804 01:03:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:48.804 01:03:00 -- common/autotest_common.sh@10 -- # set +x 00:37:48.804 ************************************ 00:37:48.804 START TEST nvmf_abort_qd_sizes 00:37:48.804 ************************************ 00:37:48.804 01:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:49.064 * Looking for test storage... 00:37:49.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:49.064 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.065 01:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:49.065 01:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.065 01:03:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:49.065 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:49.065 01:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:37:49.065 01:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:54.341 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:54.342 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:54.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:54.342 Found net devices under 0000:86:00.0: cvl_0_0 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:54.342 Found net devices under 0000:86:00.1: cvl_0_1 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
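[editor's note] The trace above is nvmf/common.sh resolving each supported NIC PCI function to its kernel net device by globbing sysfs (the `pci_net_devs=(...)` assignments). A minimal standalone sketch of that mapping, not the harness itself — the two PCI addresses are taken from the log, and any bound NIC would work:

#!/usr/bin/env bash
# Resolve a PCI function to its kernel net device(s) via sysfs,
# mirroring the traced gather_supported_nvmf_pci_devs glob.
for pci in 0000:86:00.0 0000:86:00.1; do
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue   # glob is unmatched if no netdev is bound to this function
    echo "Found net devices under $pci: ${path##*/}"
  done
done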
00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:54.342 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:54.604 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:54.604 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:54.604 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:54.604 01:03:05 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:54.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:54.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:37:54.604 00:37:54.604 --- 10.0.0.2 ping statistics --- 00:37:54.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.604 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:54.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:54.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:37:54.604 00:37:54.604 --- 10.0.0.1 ping statistics --- 00:37:54.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.604 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:54.604 01:03:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:57.928 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:57.928 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:58.495 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1651470 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1651470 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1651470 ']' 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:58.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:58.495 01:03:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:58.495 [2024-07-13 01:03:09.974386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:58.495 [2024-07-13 01:03:09.974435] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.495 EAL: No free 2048 kB hugepages reported on node 1 00:37:58.495 [2024-07-13 01:03:10.048335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:58.754 [2024-07-13 01:03:10.093987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.754 [2024-07-13 01:03:10.094030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.754 [2024-07-13 01:03:10.094038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.754 [2024-07-13 01:03:10.094044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.754 [2024-07-13 01:03:10.094050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.754 [2024-07-13 01:03:10.094106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.754 [2024-07-13 01:03:10.094213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.754 [2024-07-13 01:03:10.094322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.754 [2024-07-13 01:03:10.094323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:37:59.324 01:03:10 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:59.324 01:03:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:59.324 ************************************ 00:37:59.324 START TEST spdk_target_abort 00:37:59.324 ************************************ 00:37:59.324 01:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:37:59.324 01:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:59.324 01:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:37:59.324 01:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:59.324 01:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.614 spdk_targetn1 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.614 [2024-07-13 01:03:13.700628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.614 [2024-07-13 01:03:13.729577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:02.614 01:03:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:02.614 EAL: No free 2048 kB hugepages 
reported on node 1 00:38:05.899 Initializing NVMe Controllers 00:38:05.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:05.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:05.899 Initialization complete. Launching workers. 00:38:05.899 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15671, failed: 0 00:38:05.899 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1275, failed to submit 14396 00:38:05.899 success 724, unsuccess 551, failed 0 00:38:05.899 01:03:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:05.899 01:03:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:05.899 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.184 Initializing NVMe Controllers 00:38:09.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:09.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:09.184 Initialization complete. Launching workers. 00:38:09.184 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8522, failed: 0 00:38:09.184 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1262, failed to submit 7260 00:38:09.184 success 337, unsuccess 925, failed 0 00:38:09.184 01:03:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:09.184 01:03:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.184 EAL: No free 2048 kB hugepages reported on node 1 00:38:12.466 Initializing NVMe Controllers 00:38:12.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:12.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:12.466 Initialization complete. Launching workers. 
00:38:12.466 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38185, failed: 0 00:38:12.466 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2863, failed to submit 35322 00:38:12.466 success 616, unsuccess 2247, failed 0 00:38:12.466 01:03:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:12.466 01:03:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.466 01:03:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.466 01:03:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:12.466 01:03:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:12.466 01:03:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.466 01:03:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1651470 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1651470 ']' 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1651470 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1651470 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1651470' 00:38:13.402 killing process with pid 1651470 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1651470 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1651470 00:38:13.402 00:38:13.402 real 0m13.995s 00:38:13.402 user 0m55.996s 00:38:13.402 sys 0m2.176s 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:13.402 ************************************ 00:38:13.402 END TEST spdk_target_abort 00:38:13.402 ************************************ 00:38:13.402 01:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:38:13.402 01:03:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:13.402 01:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:13.402 01:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:13.402 01:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:13.402 
************************************ 00:38:13.402 START TEST kernel_target_abort 00:38:13.402 ************************************ 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:13.402 01:03:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:16.691 Waiting for block devices as requested 00:38:16.691 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:16.691 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:16.691 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:16.691 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:16.691 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:16.691 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:16.691 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:16.691 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:16.950 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:16.950 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:16.950 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:16.950 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:17.210 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:17.210 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:17.210 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:17.470 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:17.470 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:17.470 01:03:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:17.470 No valid GPT data, bailing 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:17.729 01:03:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:17.729 00:38:17.729 Discovery Log Number of Records 2, Generation counter 2 00:38:17.729 =====Discovery Log Entry 0====== 00:38:17.729 trtype: tcp 00:38:17.729 adrfam: ipv4 00:38:17.729 subtype: current discovery subsystem 00:38:17.729 treq: not specified, sq flow control disable supported 00:38:17.729 portid: 1 00:38:17.729 trsvcid: 4420 00:38:17.729 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:17.729 traddr: 10.0.0.1 00:38:17.729 eflags: none 00:38:17.729 sectype: none 00:38:17.729 =====Discovery Log Entry 1====== 00:38:17.729 trtype: tcp 00:38:17.729 adrfam: ipv4 00:38:17.729 subtype: nvme subsystem 00:38:17.729 treq: not specified, sq flow control disable supported 00:38:17.729 portid: 1 00:38:17.729 trsvcid: 4420 00:38:17.729 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:17.729 traddr: 10.0.0.1 00:38:17.729 eflags: none 00:38:17.729 sectype: none 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.729 01:03:29 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:17.729 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.730 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:17.730 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.730 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:17.730 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:17.730 01:03:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:17.730 EAL: No free 2048 kB hugepages reported on node 1 00:38:21.018 Initializing NVMe Controllers 00:38:21.018 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:21.018 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:21.018 Initialization complete. Launching workers. 00:38:21.018 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90112, failed: 0 00:38:21.018 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 90112, failed to submit 0 00:38:21.018 success 0, unsuccess 90112, failed 0 00:38:21.018 01:03:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:21.018 01:03:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:21.018 EAL: No free 2048 kB hugepages reported on node 1 00:38:24.342 Initializing NVMe Controllers 00:38:24.342 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:24.342 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:24.342 Initialization complete. Launching workers. 
00:38:24.342 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144870, failed: 0 00:38:24.342 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36374, failed to submit 108496 00:38:24.342 success 0, unsuccess 36374, failed 0 00:38:24.342 01:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:24.342 01:03:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:24.342 EAL: No free 2048 kB hugepages reported on node 1 00:38:26.877 Initializing NVMe Controllers 00:38:26.877 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:26.877 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:26.877 Initialization complete. Launching workers. 00:38:26.877 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138017, failed: 0 00:38:26.877 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34546, failed to submit 103471 00:38:26.878 success 0, unsuccess 34546, failed 0 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:26.878 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:27.136 01:03:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:29.669 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:29.669 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:29.669 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:29.669 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:29.669 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:29.669 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:29.669 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:29.669 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:29.929 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:29.929 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:29.929 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:29.929 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:29.929 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:29.929 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:38:29.929 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:29.929 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:30.867 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:30.867 00:38:30.867 real 0m17.298s 00:38:30.867 user 0m8.850s 00:38:30.867 sys 0m4.967s 00:38:30.867 01:03:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:30.867 01:03:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:30.867 ************************************ 00:38:30.867 END TEST kernel_target_abort 00:38:30.867 ************************************ 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:30.867 rmmod nvme_tcp 00:38:30.867 rmmod nvme_fabrics 00:38:30.867 rmmod nvme_keyring 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1651470 ']' 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1651470 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1651470 ']' 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1651470 00:38:30.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1651470) - No such process 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1651470 is not found' 00:38:30.867 Process with pid 1651470 is not found 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:30.867 01:03:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:33.399 Waiting for block devices as requested 00:38:33.657 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:33.657 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:33.657 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:33.915 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:33.915 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:33.915 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:34.173 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:34.173 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:34.173 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:34.173 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:34.432 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:34.432 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:34.432 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:34.691 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:38:34.691 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:34.691 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:34.949 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:34.949 01:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:34.949 01:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:34.949 01:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:34.949 01:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:34.949 01:03:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.949 01:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:34.949 01:03:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.852 01:03:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:36.852 00:38:36.852 real 0m48.059s 00:38:36.852 user 1m9.039s 00:38:36.852 sys 0m15.711s 00:38:36.852 01:03:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:36.852 01:03:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:36.852 ************************************ 00:38:36.852 END TEST nvmf_abort_qd_sizes 00:38:36.852 ************************************ 00:38:37.112 01:03:48 -- common/autotest_common.sh@1142 -- # return 0 00:38:37.112 01:03:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:37.112 01:03:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:37.112 01:03:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:37.112 01:03:48 -- common/autotest_common.sh@10 -- # set +x 00:38:37.112 ************************************ 00:38:37.112 START TEST keyring_file 00:38:37.112 ************************************ 00:38:37.112 01:03:48 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:37.112 * Looking for test storage... 
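The clean_kernel_target steps logged above (nvmf/common.sh@684-695) tear the configfs-based kernel NVMe-oF target down in the reverse order of its creation. A minimal standalone sketch of that teardown, assuming the same subsystem NQN and port number as this run; note that xtrace does not show redirection targets, so the destination of the echo 0 is assumed here to be the namespace enable flag:

  #!/usr/bin/env bash
  subnqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet

  # nothing to do if the subsystem was never created
  [[ -e $nvmet/subsystems/$subnqn ]] || exit 0

  # assumed target of the bare "echo 0" above: disable the namespace first
  echo 0 > "$nvmet/subsystems/$subnqn/namespaces/1/enable"

  # unlink the subsystem from the port, then remove namespace, port, subsystem
  rm -f "$nvmet/ports/1/subsystems/$subnqn"
  rmdir "$nvmet/subsystems/$subnqn/namespaces/1"
  rmdir "$nvmet/ports/1"
  rmdir "$nvmet/subsystems/$subnqn"

  # the harness also globs /sys/module/nvmet/holders/* before this step;
  # once nothing holds the modules, unload the transport and core
  modprobe -r nvmet_tcp nvmet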
00:38:37.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.112 01:03:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.112 01:03:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.112 01:03:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.112 01:03:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.112 01:03:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.112 01:03:48 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.112 01:03:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:37.112 01:03:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iEOsDNxTNX 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:37.112 01:03:48 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iEOsDNxTNX 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iEOsDNxTNX 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iEOsDNxTNX 00:38:37.112 01:03:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zj5jqWM8SQ 00:38:37.112 01:03:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:37.112 01:03:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:37.371 01:03:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zj5jqWM8SQ 00:38:37.371 01:03:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zj5jqWM8SQ 00:38:37.371 01:03:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.zj5jqWM8SQ 00:38:37.371 01:03:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=1660206 00:38:37.371 01:03:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:37.371 01:03:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1660206 00:38:37.371 01:03:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1660206 ']' 00:38:37.371 01:03:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:37.371 01:03:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:37.371 01:03:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:37.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:37.371 01:03:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:37.371 01:03:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:37.371 [2024-07-13 01:03:48.755639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
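The two prep_key calls above build TP-8011 interchange PSKs: format_interchange_psk wraps the raw hex key as NVMeTLSkey-1:<digest>:<base64 of key plus CRC32>: via the inline python seen at nvmf/common.sh@705, and the resulting file must be mode 0600 before it can be used. A sketch of preparing and registering such a key against the bperf socket, reusing the tree's own helper rather than reimplementing the encoding (assumes test/nvmf/common.sh is sourceable from the repo root):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  source test/nvmf/common.sh            # provides format_interchange_psk

  path=$(mktemp)                        # e.g. /tmp/tmp.iEOsDNxTNX in this run
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
  chmod 0600 "$path"                    # anything looser is rejected, see the 0660 failure later in the log
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"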
00:38:37.371 [2024-07-13 01:03:48.755687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660206 ] 00:38:37.371 EAL: No free 2048 kB hugepages reported on node 1 00:38:37.372 [2024-07-13 01:03:48.820058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.372 [2024-07-13 01:03:48.861649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.630 01:03:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:37.630 01:03:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:38:37.630 01:03:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:37.630 01:03:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:37.630 01:03:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:37.630 [2024-07-13 01:03:49.058086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:37.630 null0 00:38:37.630 [2024-07-13 01:03:49.090129] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:37.631 [2024-07-13 01:03:49.090460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:37.631 [2024-07-13 01:03:49.098148] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:37.631 01:03:49 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:37.631 [2024-07-13 01:03:49.110178] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:37.631 request: 00:38:37.631 { 00:38:37.631 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:37.631 "secure_channel": false, 00:38:37.631 "listen_address": { 00:38:37.631 "trtype": "tcp", 00:38:37.631 "traddr": "127.0.0.1", 00:38:37.631 "trsvcid": "4420" 00:38:37.631 }, 00:38:37.631 "method": "nvmf_subsystem_add_listener", 00:38:37.631 "req_id": 1 00:38:37.631 } 00:38:37.631 Got JSON-RPC error response 00:38:37.631 response: 00:38:37.631 { 00:38:37.631 "code": -32602, 00:38:37.631 "message": "Invalid parameters" 00:38:37.631 } 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 
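The request/response pair above is a deliberate negative test: the target opened 127.0.0.1:4420 for cnode0 at startup, so a second nvmf_subsystem_add_listener for the same address must fail, and the NOT wrapper inverts the exit status so the test passes only on failure. Reproduced by hand against the target's default RPC socket, the check looks roughly like this:

  # must fail: the listener already exists (JSON-RPC code -32602, "Invalid parameters")
  if scripts/rpc.py nvmf_subsystem_add_listener \
      -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "duplicate listener was accepted unexpectedly" >&2
    exit 1
  fi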
00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:37.631 01:03:49 keyring_file -- keyring/file.sh@46 -- # bperfpid=1660341 00:38:37.631 01:03:49 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1660341 /var/tmp/bperf.sock 00:38:37.631 01:03:49 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1660341 ']' 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:37.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:37.631 01:03:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:37.631 [2024-07-13 01:03:49.163354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:38:37.631 [2024-07-13 01:03:49.163397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660341 ] 00:38:37.631 EAL: No free 2048 kB hugepages reported on node 1 00:38:37.889 [2024-07-13 01:03:49.232713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.889 [2024-07-13 01:03:49.273132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.889 01:03:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:37.889 01:03:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:38:37.889 01:03:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:37.889 01:03:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:38.147 01:03:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zj5jqWM8SQ 00:38:38.147 01:03:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zj5jqWM8SQ 00:38:38.147 01:03:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:38:38.147 01:03:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:38:38.147 01:03:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.147 01:03:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:38.147 01:03:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.405 01:03:49 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.iEOsDNxTNX == \/\t\m\p\/\t\m\p\.\i\E\O\s\D\N\x\T\N\X ]] 00:38:38.405 01:03:49 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:38:38.405 01:03:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:38.405 01:03:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.405 01:03:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.405 01:03:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.664 01:03:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.zj5jqWM8SQ == \/\t\m\p\/\t\m\p\.\z\j\5\j\q\W\M\8\S\Q ]] 00:38:38.664 01:03:50 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:38:38.664 01:03:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:38.664 01:03:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.664 01:03:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:38.664 01:03:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.664 01:03:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.923 01:03:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:38:38.923 01:03:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:38:38.923 01:03:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:38.923 01:03:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.923 01:03:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.923 01:03:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.923 01:03:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.923 01:03:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:38.923 01:03:50 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:38.923 01:03:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:39.182 [2024-07-13 01:03:50.603852] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:39.182 nvme0n1 00:38:39.182 01:03:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:38:39.182 01:03:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:39.182 01:03:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:39.182 01:03:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.182 01:03:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:39.182 01:03:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.441 01:03:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:38:39.441 01:03:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:38:39.441 01:03:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:39.441 01:03:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:39.441 01:03:50 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.441 01:03:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.441 01:03:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:39.700 01:03:51 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:38:39.700 01:03:51 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:39.700 Running I/O for 1 seconds... 00:38:40.637 00:38:40.637 Latency(us) 00:38:40.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.637 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:40.637 nvme0n1 : 1.00 17772.69 69.42 0.00 0.00 7185.06 4103.12 13848.04 00:38:40.637 =================================================================================================================== 00:38:40.637 Total : 17772.69 69.42 0.00 0.00 7185.06 4103.12 13848.04 00:38:40.637 0 00:38:40.637 01:03:52 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:40.637 01:03:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:40.896 01:03:52 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:38:40.896 01:03:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.896 01:03:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.896 01:03:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.896 01:03:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.896 01:03:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.155 01:03:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:38:41.155 01:03:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:38:41.155 01:03:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:41.155 01:03:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:41.155 01:03:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.155 01:03:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:41.155 01:03:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.414 01:03:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:41.414 01:03:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:41.414 01:03:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:41.414 01:03:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:41.414 01:03:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:41.414 01:03:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:41.414 01:03:52 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:41.414 01:03:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:41.414 01:03:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:41.414 01:03:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:41.415 [2024-07-13 01:03:52.912205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:41.415 [2024-07-13 01:03:52.913132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1572cd0 (107): Transport endpoint is not connected 00:38:41.415 [2024-07-13 01:03:52.914127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1572cd0 (9): Bad file descriptor 00:38:41.415 [2024-07-13 01:03:52.915128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:41.415 [2024-07-13 01:03:52.915141] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:41.415 [2024-07-13 01:03:52.915149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:41.415 request: 00:38:41.415 { 00:38:41.415 "name": "nvme0", 00:38:41.415 "trtype": "tcp", 00:38:41.415 "traddr": "127.0.0.1", 00:38:41.415 "adrfam": "ipv4", 00:38:41.415 "trsvcid": "4420", 00:38:41.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:41.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:41.415 "prchk_reftag": false, 00:38:41.415 "prchk_guard": false, 00:38:41.415 "hdgst": false, 00:38:41.415 "ddgst": false, 00:38:41.415 "psk": "key1", 00:38:41.415 "method": "bdev_nvme_attach_controller", 00:38:41.415 "req_id": 1 00:38:41.415 } 00:38:41.415 Got JSON-RPC error response 00:38:41.415 response: 00:38:41.415 { 00:38:41.415 "code": -5, 00:38:41.415 "message": "Input/output error" 00:38:41.415 } 00:38:41.415 01:03:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:41.415 01:03:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:41.415 01:03:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:41.415 01:03:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:41.415 01:03:52 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:38:41.415 01:03:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:41.415 01:03:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:41.415 01:03:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.415 01:03:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:41.415 01:03:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.673 01:03:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:38:41.673 01:03:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:38:41.673 01:03:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:41.673 01:03:53 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:41.673 01:03:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.673 01:03:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.673 01:03:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:41.931 01:03:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:41.931 01:03:53 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:38:41.931 01:03:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:41.931 01:03:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:38:41.931 01:03:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:42.189 01:03:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:38:42.189 01:03:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:38:42.189 01:03:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.449 01:03:53 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:38:42.449 01:03:53 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.iEOsDNxTNX 00:38:42.449 01:03:53 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:42.449 01:03:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:42.449 01:03:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:42.449 01:03:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:42.449 01:03:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:42.449 01:03:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:42.449 01:03:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:42.449 01:03:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:42.449 01:03:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:42.449 [2024-07-13 01:03:53.983265] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iEOsDNxTNX': 0100660 00:38:42.449 [2024-07-13 01:03:53.983290] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:42.449 request: 00:38:42.449 { 00:38:42.449 "name": "key0", 00:38:42.449 "path": "/tmp/tmp.iEOsDNxTNX", 00:38:42.449 "method": "keyring_file_add_key", 00:38:42.449 "req_id": 1 00:38:42.449 } 00:38:42.449 Got JSON-RPC error response 00:38:42.449 response: 00:38:42.449 { 00:38:42.449 "code": -1, 00:38:42.449 "message": "Operation not permitted" 00:38:42.449 } 00:38:42.708 01:03:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:42.708 01:03:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:42.708 01:03:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:42.708 01:03:54 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:42.708 01:03:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.iEOsDNxTNX 00:38:42.708 01:03:54 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:42.708 01:03:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iEOsDNxTNX 00:38:42.708 01:03:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.iEOsDNxTNX 00:38:42.708 01:03:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:38:42.708 01:03:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.708 01:03:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.708 01:03:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.708 01:03:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.708 01:03:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.967 01:03:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:38:42.967 01:03:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.967 01:03:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:42.967 01:03:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.967 01:03:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:42.967 01:03:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:42.967 01:03:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:42.967 01:03:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:42.967 01:03:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.967 01:03:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.271 [2024-07-13 01:03:54.548761] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iEOsDNxTNX': No such file or directory 00:38:43.271 [2024-07-13 01:03:54.548786] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:43.271 [2024-07-13 01:03:54.548806] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:43.271 [2024-07-13 01:03:54.548829] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:43.271 [2024-07-13 01:03:54.548835] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:43.271 request: 00:38:43.271 { 00:38:43.271 "name": "nvme0", 00:38:43.271 "trtype": "tcp", 00:38:43.271 "traddr": "127.0.0.1", 00:38:43.271 "adrfam": "ipv4", 00:38:43.271 
"trsvcid": "4420", 00:38:43.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:43.271 "prchk_reftag": false, 00:38:43.271 "prchk_guard": false, 00:38:43.271 "hdgst": false, 00:38:43.271 "ddgst": false, 00:38:43.271 "psk": "key0", 00:38:43.271 "method": "bdev_nvme_attach_controller", 00:38:43.271 "req_id": 1 00:38:43.271 } 00:38:43.271 Got JSON-RPC error response 00:38:43.271 response: 00:38:43.271 { 00:38:43.271 "code": -19, 00:38:43.271 "message": "No such device" 00:38:43.271 } 00:38:43.271 01:03:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:43.271 01:03:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:43.271 01:03:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:43.271 01:03:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:43.271 01:03:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:43.271 01:03:54 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5lYxXR6U1S 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:43.271 01:03:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:43.271 01:03:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:43.271 01:03:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:43.271 01:03:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:43.271 01:03:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:43.271 01:03:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5lYxXR6U1S 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5lYxXR6U1S 00:38:43.271 01:03:54 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.5lYxXR6U1S 00:38:43.271 01:03:54 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5lYxXR6U1S 00:38:43.271 01:03:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5lYxXR6U1S 00:38:43.545 01:03:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.545 01:03:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.804 nvme0n1 00:38:43.804 
01:03:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:38:43.804 01:03:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:43.804 01:03:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:43.804 01:03:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.805 01:03:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.805 01:03:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.064 01:03:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:38:44.064 01:03:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:38:44.064 01:03:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:44.064 01:03:55 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:38:44.064 01:03:55 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:38:44.064 01:03:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.064 01:03:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.064 01:03:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:44.324 01:03:55 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:38:44.324 01:03:55 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:38:44.324 01:03:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:44.324 01:03:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:44.324 01:03:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:44.324 01:03:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:44.324 01:03:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.583 01:03:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:38:44.583 01:03:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:44.583 01:03:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:44.583 01:03:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:38:44.583 01:03:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:38:44.583 01:03:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.843 01:03:56 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:38:44.843 01:03:56 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5lYxXR6U1S 00:38:44.843 01:03:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5lYxXR6U1S 00:38:45.102 01:03:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zj5jqWM8SQ 00:38:45.102 01:03:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zj5jqWM8SQ 00:38:45.102 01:03:56 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:45.102 01:03:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:45.362 nvme0n1 00:38:45.362 01:03:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:38:45.362 01:03:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:45.621 01:03:57 keyring_file -- keyring/file.sh@112 -- # config='{ 00:38:45.621 "subsystems": [ 00:38:45.621 { 00:38:45.621 "subsystem": "keyring", 00:38:45.621 "config": [ 00:38:45.621 { 00:38:45.621 "method": "keyring_file_add_key", 00:38:45.621 "params": { 00:38:45.621 "name": "key0", 00:38:45.621 "path": "/tmp/tmp.5lYxXR6U1S" 00:38:45.621 } 00:38:45.621 }, 00:38:45.621 { 00:38:45.621 "method": "keyring_file_add_key", 00:38:45.621 "params": { 00:38:45.621 "name": "key1", 00:38:45.621 "path": "/tmp/tmp.zj5jqWM8SQ" 00:38:45.621 } 00:38:45.621 } 00:38:45.621 ] 00:38:45.621 }, 00:38:45.621 { 00:38:45.621 "subsystem": "iobuf", 00:38:45.621 "config": [ 00:38:45.621 { 00:38:45.621 "method": "iobuf_set_options", 00:38:45.621 "params": { 00:38:45.621 "small_pool_count": 8192, 00:38:45.621 "large_pool_count": 1024, 00:38:45.621 "small_bufsize": 8192, 00:38:45.621 "large_bufsize": 135168 00:38:45.621 } 00:38:45.621 } 00:38:45.621 ] 00:38:45.621 }, 00:38:45.621 { 00:38:45.621 "subsystem": "sock", 00:38:45.621 "config": [ 00:38:45.621 { 00:38:45.621 "method": "sock_set_default_impl", 00:38:45.621 "params": { 00:38:45.621 "impl_name": "posix" 00:38:45.621 } 00:38:45.621 }, 00:38:45.621 { 00:38:45.621 "method": "sock_impl_set_options", 00:38:45.621 "params": { 00:38:45.621 "impl_name": "ssl", 00:38:45.621 "recv_buf_size": 4096, 00:38:45.621 "send_buf_size": 4096, 00:38:45.621 "enable_recv_pipe": true, 00:38:45.621 "enable_quickack": false, 00:38:45.621 "enable_placement_id": 0, 00:38:45.621 "enable_zerocopy_send_server": true, 00:38:45.621 "enable_zerocopy_send_client": false, 00:38:45.621 "zerocopy_threshold": 0, 00:38:45.621 "tls_version": 0, 00:38:45.621 "enable_ktls": false 00:38:45.621 } 00:38:45.621 }, 00:38:45.621 { 00:38:45.621 "method": "sock_impl_set_options", 00:38:45.621 "params": { 00:38:45.621 "impl_name": "posix", 00:38:45.621 "recv_buf_size": 2097152, 00:38:45.621 "send_buf_size": 2097152, 00:38:45.621 "enable_recv_pipe": true, 00:38:45.621 "enable_quickack": false, 00:38:45.621 "enable_placement_id": 0, 00:38:45.621 "enable_zerocopy_send_server": true, 00:38:45.621 "enable_zerocopy_send_client": false, 00:38:45.621 "zerocopy_threshold": 0, 00:38:45.621 "tls_version": 0, 00:38:45.621 "enable_ktls": false 00:38:45.621 } 00:38:45.621 } 00:38:45.621 ] 00:38:45.621 }, 00:38:45.621 { 00:38:45.621 "subsystem": "vmd", 00:38:45.621 "config": [] 00:38:45.621 }, 00:38:45.621 { 00:38:45.621 "subsystem": "accel", 00:38:45.621 "config": [ 00:38:45.621 { 00:38:45.621 "method": "accel_set_options", 00:38:45.621 "params": { 00:38:45.621 "small_cache_size": 128, 00:38:45.621 "large_cache_size": 16, 00:38:45.621 "task_count": 2048, 00:38:45.621 "sequence_count": 2048, 00:38:45.621 "buf_count": 2048 00:38:45.621 } 00:38:45.621 } 00:38:45.621 ] 00:38:45.621 
}, 00:38:45.621 { 00:38:45.621 "subsystem": "bdev", 00:38:45.621 "config": [ 00:38:45.621 { 00:38:45.622 "method": "bdev_set_options", 00:38:45.622 "params": { 00:38:45.622 "bdev_io_pool_size": 65535, 00:38:45.622 "bdev_io_cache_size": 256, 00:38:45.622 "bdev_auto_examine": true, 00:38:45.622 "iobuf_small_cache_size": 128, 00:38:45.622 "iobuf_large_cache_size": 16 00:38:45.622 } 00:38:45.622 }, 00:38:45.622 { 00:38:45.622 "method": "bdev_raid_set_options", 00:38:45.622 "params": { 00:38:45.622 "process_window_size_kb": 1024 00:38:45.622 } 00:38:45.622 }, 00:38:45.622 { 00:38:45.622 "method": "bdev_iscsi_set_options", 00:38:45.622 "params": { 00:38:45.622 "timeout_sec": 30 00:38:45.622 } 00:38:45.622 }, 00:38:45.622 { 00:38:45.622 "method": "bdev_nvme_set_options", 00:38:45.622 "params": { 00:38:45.622 "action_on_timeout": "none", 00:38:45.622 "timeout_us": 0, 00:38:45.622 "timeout_admin_us": 0, 00:38:45.622 "keep_alive_timeout_ms": 10000, 00:38:45.622 "arbitration_burst": 0, 00:38:45.622 "low_priority_weight": 0, 00:38:45.622 "medium_priority_weight": 0, 00:38:45.622 "high_priority_weight": 0, 00:38:45.622 "nvme_adminq_poll_period_us": 10000, 00:38:45.622 "nvme_ioq_poll_period_us": 0, 00:38:45.622 "io_queue_requests": 512, 00:38:45.622 "delay_cmd_submit": true, 00:38:45.622 "transport_retry_count": 4, 00:38:45.622 "bdev_retry_count": 3, 00:38:45.622 "transport_ack_timeout": 0, 00:38:45.622 "ctrlr_loss_timeout_sec": 0, 00:38:45.622 "reconnect_delay_sec": 0, 00:38:45.622 "fast_io_fail_timeout_sec": 0, 00:38:45.622 "disable_auto_failback": false, 00:38:45.622 "generate_uuids": false, 00:38:45.622 "transport_tos": 0, 00:38:45.622 "nvme_error_stat": false, 00:38:45.622 "rdma_srq_size": 0, 00:38:45.622 "io_path_stat": false, 00:38:45.622 "allow_accel_sequence": false, 00:38:45.622 "rdma_max_cq_size": 0, 00:38:45.622 "rdma_cm_event_timeout_ms": 0, 00:38:45.622 "dhchap_digests": [ 00:38:45.622 "sha256", 00:38:45.622 "sha384", 00:38:45.622 "sha512" 00:38:45.622 ], 00:38:45.622 "dhchap_dhgroups": [ 00:38:45.622 "null", 00:38:45.622 "ffdhe2048", 00:38:45.622 "ffdhe3072", 00:38:45.622 "ffdhe4096", 00:38:45.622 "ffdhe6144", 00:38:45.622 "ffdhe8192" 00:38:45.622 ] 00:38:45.622 } 00:38:45.622 }, 00:38:45.622 { 00:38:45.622 "method": "bdev_nvme_attach_controller", 00:38:45.622 "params": { 00:38:45.622 "name": "nvme0", 00:38:45.622 "trtype": "TCP", 00:38:45.622 "adrfam": "IPv4", 00:38:45.622 "traddr": "127.0.0.1", 00:38:45.622 "trsvcid": "4420", 00:38:45.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.622 "prchk_reftag": false, 00:38:45.622 "prchk_guard": false, 00:38:45.622 "ctrlr_loss_timeout_sec": 0, 00:38:45.622 "reconnect_delay_sec": 0, 00:38:45.622 "fast_io_fail_timeout_sec": 0, 00:38:45.622 "psk": "key0", 00:38:45.622 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.622 "hdgst": false, 00:38:45.622 "ddgst": false 00:38:45.622 } 00:38:45.622 }, 00:38:45.622 { 00:38:45.622 "method": "bdev_nvme_set_hotplug", 00:38:45.622 "params": { 00:38:45.622 "period_us": 100000, 00:38:45.622 "enable": false 00:38:45.622 } 00:38:45.622 }, 00:38:45.622 { 00:38:45.622 "method": "bdev_wait_for_examine" 00:38:45.622 } 00:38:45.622 ] 00:38:45.622 }, 00:38:45.622 { 00:38:45.622 "subsystem": "nbd", 00:38:45.622 "config": [] 00:38:45.622 } 00:38:45.622 ] 00:38:45.622 }' 00:38:45.622 01:03:57 keyring_file -- keyring/file.sh@114 -- # killprocess 1660341 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1660341 ']' 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1660341 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@953 -- # uname 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1660341 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1660341' 00:38:45.622 killing process with pid 1660341 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@967 -- # kill 1660341 00:38:45.622 Received shutdown signal, test time was about 1.000000 seconds 00:38:45.622 00:38:45.622 Latency(us) 00:38:45.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.622 =================================================================================================================== 00:38:45.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:45.622 01:03:57 keyring_file -- common/autotest_common.sh@972 -- # wait 1660341 00:38:45.881 01:03:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=1661638 00:38:45.881 01:03:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1661638 /var/tmp/bperf.sock 00:38:45.881 01:03:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1661638 ']' 00:38:45.881 01:03:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:45.881 01:03:57 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:45.881 01:03:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:45.881 01:03:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:45.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
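The -c /dev/fd/63 on the bdevperf command line above is a bash process substitution: file.sh captures the first instance's state with save_config and pipes that JSON straight into the relaunch, which is why the same keyring, sock, and bdev_nvme sections reappear verbatim below. A sketch of the round trip, assuming a first bdevperf is already serving /var/tmp/bperf.sock:

  # snapshot the live state: registered keys, sock options, attached nvme0
  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)

  # boot a fresh instance from it; <(...) shows up in the child as /dev/fd/63
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")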
00:38:45.881 01:03:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:38:45.881 "subsystems": [ 00:38:45.881 { 00:38:45.881 "subsystem": "keyring", 00:38:45.881 "config": [ 00:38:45.881 { 00:38:45.881 "method": "keyring_file_add_key", 00:38:45.881 "params": { 00:38:45.881 "name": "key0", 00:38:45.881 "path": "/tmp/tmp.5lYxXR6U1S" 00:38:45.881 } 00:38:45.881 }, 00:38:45.881 { 00:38:45.881 "method": "keyring_file_add_key", 00:38:45.881 "params": { 00:38:45.881 "name": "key1", 00:38:45.881 "path": "/tmp/tmp.zj5jqWM8SQ" 00:38:45.881 } 00:38:45.881 } 00:38:45.881 ] 00:38:45.881 }, 00:38:45.881 { 00:38:45.881 "subsystem": "iobuf", 00:38:45.881 "config": [ 00:38:45.881 { 00:38:45.881 "method": "iobuf_set_options", 00:38:45.881 "params": { 00:38:45.881 "small_pool_count": 8192, 00:38:45.881 "large_pool_count": 1024, 00:38:45.881 "small_bufsize": 8192, 00:38:45.881 "large_bufsize": 135168 00:38:45.881 } 00:38:45.881 } 00:38:45.881 ] 00:38:45.881 }, 00:38:45.881 { 00:38:45.881 "subsystem": "sock", 00:38:45.881 "config": [ 00:38:45.882 { 00:38:45.882 "method": "sock_set_default_impl", 00:38:45.882 "params": { 00:38:45.882 "impl_name": "posix" 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": "sock_impl_set_options", 00:38:45.882 "params": { 00:38:45.882 "impl_name": "ssl", 00:38:45.882 "recv_buf_size": 4096, 00:38:45.882 "send_buf_size": 4096, 00:38:45.882 "enable_recv_pipe": true, 00:38:45.882 "enable_quickack": false, 00:38:45.882 "enable_placement_id": 0, 00:38:45.882 "enable_zerocopy_send_server": true, 00:38:45.882 "enable_zerocopy_send_client": false, 00:38:45.882 "zerocopy_threshold": 0, 00:38:45.882 "tls_version": 0, 00:38:45.882 "enable_ktls": false 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": "sock_impl_set_options", 00:38:45.882 "params": { 00:38:45.882 "impl_name": "posix", 00:38:45.882 "recv_buf_size": 2097152, 00:38:45.882 "send_buf_size": 2097152, 00:38:45.882 "enable_recv_pipe": true, 00:38:45.882 "enable_quickack": false, 00:38:45.882 "enable_placement_id": 0, 00:38:45.882 "enable_zerocopy_send_server": true, 00:38:45.882 "enable_zerocopy_send_client": false, 00:38:45.882 "zerocopy_threshold": 0, 00:38:45.882 "tls_version": 0, 00:38:45.882 "enable_ktls": false 00:38:45.882 } 00:38:45.882 } 00:38:45.882 ] 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "subsystem": "vmd", 00:38:45.882 "config": [] 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "subsystem": "accel", 00:38:45.882 "config": [ 00:38:45.882 { 00:38:45.882 "method": "accel_set_options", 00:38:45.882 "params": { 00:38:45.882 "small_cache_size": 128, 00:38:45.882 "large_cache_size": 16, 00:38:45.882 "task_count": 2048, 00:38:45.882 "sequence_count": 2048, 00:38:45.882 "buf_count": 2048 00:38:45.882 } 00:38:45.882 } 00:38:45.882 ] 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "subsystem": "bdev", 00:38:45.882 "config": [ 00:38:45.882 { 00:38:45.882 "method": "bdev_set_options", 00:38:45.882 "params": { 00:38:45.882 "bdev_io_pool_size": 65535, 00:38:45.882 "bdev_io_cache_size": 256, 00:38:45.882 "bdev_auto_examine": true, 00:38:45.882 "iobuf_small_cache_size": 128, 00:38:45.882 "iobuf_large_cache_size": 16 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": "bdev_raid_set_options", 00:38:45.882 "params": { 00:38:45.882 "process_window_size_kb": 1024 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": "bdev_iscsi_set_options", 00:38:45.882 "params": { 00:38:45.882 "timeout_sec": 30 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": 
"bdev_nvme_set_options", 00:38:45.882 "params": { 00:38:45.882 "action_on_timeout": "none", 00:38:45.882 "timeout_us": 0, 00:38:45.882 "timeout_admin_us": 0, 00:38:45.882 "keep_alive_timeout_ms": 10000, 00:38:45.882 "arbitration_burst": 0, 00:38:45.882 "low_priority_weight": 0, 00:38:45.882 "medium_priority_weight": 0, 00:38:45.882 "high_priority_weight": 0, 00:38:45.882 "nvme_adminq_poll_period_us": 10000, 00:38:45.882 "nvme_ioq_poll_period_us": 0, 00:38:45.882 "io_queue_requests": 512, 00:38:45.882 "delay_cmd_submit": true, 00:38:45.882 "transport_retry_count": 4, 00:38:45.882 "bdev_retry_count": 3, 00:38:45.882 "transport_ack_timeout": 0, 00:38:45.882 "ctrlr_loss_timeout_sec": 0, 00:38:45.882 "reconnect_delay_sec": 0, 00:38:45.882 "fast_io_fail_timeout_sec": 0, 00:38:45.882 "disable_auto_failback": false, 00:38:45.882 "generate_uuids": false, 00:38:45.882 "transport_tos": 0, 00:38:45.882 "nvme_error_stat": false, 00:38:45.882 "rdma_srq_size": 0, 00:38:45.882 "io_path_stat": false, 00:38:45.882 "allow_accel_sequence": false, 00:38:45.882 "rdma_max_cq_size": 0, 00:38:45.882 "rdma_cm_event_timeout_ms": 0, 00:38:45.882 "dhchap_digests": [ 00:38:45.882 "sha256", 00:38:45.882 "sha384", 00:38:45.882 "sha512" 00:38:45.882 ], 00:38:45.882 "dhchap_dhgroups": [ 00:38:45.882 "null", 00:38:45.882 "ffdhe2048", 00:38:45.882 "ffdhe3072", 00:38:45.882 "ffdhe4096", 00:38:45.882 "ffdhe6144", 00:38:45.882 "ffdhe8192" 00:38:45.882 ] 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": "bdev_nvme_attach_controller", 00:38:45.882 "params": { 00:38:45.882 "name": "nvme0", 00:38:45.882 "trtype": "TCP", 00:38:45.882 "adrfam": "IPv4", 00:38:45.882 "traddr": "127.0.0.1", 00:38:45.882 "trsvcid": "4420", 00:38:45.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.882 "prchk_reftag": false, 00:38:45.882 "prchk_guard": false, 00:38:45.882 "ctrlr_loss_timeout_sec": 0, 00:38:45.882 "reconnect_delay_sec": 0, 00:38:45.882 "fast_io_fail_timeout_sec": 0, 00:38:45.882 "psk": "key0", 00:38:45.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.882 "hdgst": false, 00:38:45.882 "ddgst": false 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": "bdev_nvme_set_hotplug", 00:38:45.882 "params": { 00:38:45.882 "period_us": 100000, 00:38:45.882 "enable": false 00:38:45.882 } 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "method": "bdev_wait_for_examine" 00:38:45.882 } 00:38:45.882 ] 00:38:45.882 }, 00:38:45.882 { 00:38:45.882 "subsystem": "nbd", 00:38:45.882 "config": [] 00:38:45.882 } 00:38:45.882 ] 00:38:45.882 }' 00:38:45.882 01:03:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:45.882 01:03:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:45.882 [2024-07-13 01:03:57.370656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:38:45.882 [2024-07-13 01:03:57.370705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661638 ] 00:38:45.882 EAL: No free 2048 kB hugepages reported on node 1 00:38:45.882 [2024-07-13 01:03:57.439767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.141 [2024-07-13 01:03:57.480511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:46.141 [2024-07-13 01:03:57.634049] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:46.709 01:03:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:46.709 01:03:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:38:46.709 01:03:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:38:46.709 01:03:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.709 01:03:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:38:46.969 01:03:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:38:46.969 01:03:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:38:46.969 01:03:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:46.969 01:03:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.969 01:03:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:46.969 01:03:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.969 01:03:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:47.234 01:03:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:47.234 01:03:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:38:47.234 01:03:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:47.234 01:03:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:47.234 01:03:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:47.234 01:03:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:47.234 01:03:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:47.234 01:03:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:38:47.234 01:03:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:38:47.234 01:03:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:38:47.234 01:03:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:47.495 01:03:58 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:38:47.495 01:03:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:47.495 01:03:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5lYxXR6U1S /tmp/tmp.zj5jqWM8SQ 00:38:47.495 01:03:58 keyring_file -- keyring/file.sh@20 -- # killprocess 1661638 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1661638 ']' 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1661638 00:38:47.495 01:03:58 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1661638 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1661638' 00:38:47.495 killing process with pid 1661638 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@967 -- # kill 1661638 00:38:47.495 Received shutdown signal, test time was about 1.000000 seconds 00:38:47.495 00:38:47.495 Latency(us) 00:38:47.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.495 =================================================================================================================== 00:38:47.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:47.495 01:03:58 keyring_file -- common/autotest_common.sh@972 -- # wait 1661638 00:38:47.753 01:03:59 keyring_file -- keyring/file.sh@21 -- # killprocess 1660206 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1660206 ']' 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1660206 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@953 -- # uname 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1660206 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1660206' 00:38:47.753 killing process with pid 1660206 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@967 -- # kill 1660206 00:38:47.753 [2024-07-13 01:03:59.189274] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:38:47.753 01:03:59 keyring_file -- common/autotest_common.sh@972 -- # wait 1660206 00:38:48.086 00:38:48.087 real 0m11.013s 00:38:48.087 user 0m27.125s 00:38:48.087 sys 0m2.740s 00:38:48.087 01:03:59 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:48.087 01:03:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:48.087 ************************************ 00:38:48.087 END TEST keyring_file 00:38:48.087 ************************************ 00:38:48.087 01:03:59 -- common/autotest_common.sh@1142 -- # return 0 00:38:48.087 01:03:59 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:38:48.087 01:03:59 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:48.087 01:03:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:48.087 01:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:48.087 01:03:59 -- common/autotest_common.sh@10 -- # set +x 00:38:48.087 ************************************ 00:38:48.087 START TEST keyring_linux 00:38:48.087 ************************************ 00:38:48.087 01:03:59 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:48.087 * Looking for test storage... 00:38:48.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:48.345 01:03:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.345 01:03:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.345 01:03:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.345 01:03:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.345 01:03:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.345 01:03:59 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.345 01:03:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:48.345 01:03:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:38:48.345 01:03:59 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:48.345 /tmp/:spdk-test:key0 00:38:48.345 01:03:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:48.345 01:03:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:48.345 01:03:59 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:48.346 01:03:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:38:48.346 01:03:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:38:48.346 01:03:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:48.346 01:03:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:48.346 /tmp/:spdk-test:key1 00:38:48.346 01:03:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1662178 00:38:48.346 01:03:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1662178 00:38:48.346 01:03:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:48.346 01:03:59 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1662178 ']' 00:38:48.346 01:03:59 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.346 01:03:59 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:48.346 01:03:59 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.346 01:03:59 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:48.346 01:03:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:48.346 [2024-07-13 01:03:59.803972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
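The prep_key trace above is compact, so here is an illustrative re-derivation of the interchange format it produces: the configured key bytes plus a CRC32, base64-encoded and wrapped as NVMeTLSkey-1:<hash>:...:. The little-endian CRC byte order and the exact field formatting are assumptions inferred from the key material in this log (the output matches the NVMeTLSkey-1:00:MDAx...JEiQ: string added to the keyring in the next entries); this is not a copy of the test's helper.

# Wrap a configured key in the NVMe TLS PSK interchange format and stash it
# in a mode-0600 file, as prep_key does with /tmp/:spdk-test:key0.
key=00112233445566778899aabbccddeeff
digest=0                                # 0 => hash field "00" (no HKDF digest)
path=/tmp/:spdk-test:key0

python3 - "$key" "$digest" > "$path" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
body = key + zlib.crc32(key).to_bytes(4, "little")   # CRC byte order: assumption
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(body).decode()}:")
EOF
chmod 0600 "$path"
cat "$path"   # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1...JEiQ:

Unlike keyring_file, keyring_linux feeds this string to the kernel keyring rather than to a keyring_file_add_key RPC, which is the essential difference between the two suites.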
00:38:48.346 [2024-07-13 01:03:59.804021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662178 ] 00:38:48.346 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.346 [2024-07-13 01:03:59.870972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.604 [2024-07-13 01:03:59.911916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.604 01:04:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:48.604 01:04:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:38:48.604 01:04:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:48.604 01:04:00 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.604 01:04:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:48.604 [2024-07-13 01:04:00.115153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.604 null0 00:38:48.604 [2024-07-13 01:04:00.147203] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:48.604 [2024-07-13 01:04:00.147540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:48.862 01:04:00 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.862 01:04:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:48.862 57972105 00:38:48.862 01:04:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:48.862 533103176 00:38:48.862 01:04:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1662183 00:38:48.862 01:04:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1662183 /var/tmp/bperf.sock 00:38:48.862 01:04:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:48.862 01:04:00 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1662183 ']' 00:38:48.862 01:04:00 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:48.862 01:04:00 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:48.862 01:04:00 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:48.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:48.862 01:04:00 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:48.863 01:04:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:48.863 [2024-07-13 01:04:00.217254] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
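For reference, the kernel-keyring half of the flow above reduces to plain keyctl calls (keyutils must be installed; the error message is illustrative, everything else mirrors the trace):

# Store the formatted PSK in the session keyring (@s); keyctl add prints the
# key's serial number, which the test later compares against keyctl search.
name=:spdk-test:key0
sn=$(keyctl add user "$name" "$(cat /tmp/:spdk-test:key0)" @s)

found=$(keyctl search @s user "$name")
[[ "$found" == "$sn" ]] || echo "serial mismatch for $name" >&2

keyctl print "$sn"    # dumps the NVMeTLSkey-1:... payload for comparison
keyctl unlink "$sn"   # cleanup; reported as "1 links removed" in the trace

bdevperf then references the key as --psk :spdk-test:key0, resolving it from the session keyring instead of from a file, as the attach_controller calls below show.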
00:38:48.863 [2024-07-13 01:04:00.217300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662183 ] 00:38:48.863 EAL: No free 2048 kB hugepages reported on node 1 00:38:48.863 [2024-07-13 01:04:00.283339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.863 [2024-07-13 01:04:00.323893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.863 01:04:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:48.863 01:04:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:38:48.863 01:04:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:48.863 01:04:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:49.121 01:04:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:49.121 01:04:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:49.380 01:04:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:49.380 01:04:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:49.380 [2024-07-13 01:04:00.926249] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:49.639 nvme0n1 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:49.639 01:04:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:49.639 01:04:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:49.639 01:04:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:49.639 01:04:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:49.639 01:04:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.898 01:04:01 keyring_linux -- keyring/linux.sh@25 -- # sn=57972105 00:38:49.898 01:04:01 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:49.898 01:04:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:38:49.898 01:04:01 keyring_linux -- keyring/linux.sh@26 -- # [[ 57972105 == \5\7\9\7\2\1\0\5 ]] 00:38:49.898 01:04:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 57972105 00:38:49.898 01:04:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:49.898 01:04:01 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:50.157 Running I/O for 1 seconds... 00:38:51.094 00:38:51.094 Latency(us) 00:38:51.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.094 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:51.094 nvme0n1 : 1.01 19325.74 75.49 0.00 0.00 6597.18 5328.36 12765.27 00:38:51.094 =================================================================================================================== 00:38:51.094 Total : 19325.74 75.49 0.00 0.00 6597.18 5328.36 12765.27 00:38:51.094 0 00:38:51.094 01:04:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:51.094 01:04:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:51.354 01:04:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:51.354 01:04:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.354 01:04:02 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:38:51.354 01:04:02 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.354 01:04:02 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:51.354 01:04:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:51.354 01:04:02 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:51.354 01:04:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:51.354 01:04:02 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.354 01:04:02 keyring_linux -- keyring/common.sh@8 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.613 [2024-07-13 01:04:03.025671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:51.613 [2024-07-13 01:04:03.026031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180990 (107): Transport endpoint is not connected 00:38:51.613 [2024-07-13 01:04:03.027025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180990 (9): Bad file descriptor 00:38:51.613 [2024-07-13 01:04:03.028025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:51.613 [2024-07-13 01:04:03.028035] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:51.613 [2024-07-13 01:04:03.028041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:51.613 request: 00:38:51.613 { 00:38:51.613 "name": "nvme0", 00:38:51.613 "trtype": "tcp", 00:38:51.613 "traddr": "127.0.0.1", 00:38:51.613 "adrfam": "ipv4", 00:38:51.613 "trsvcid": "4420", 00:38:51.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:51.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:51.613 "prchk_reftag": false, 00:38:51.613 "prchk_guard": false, 00:38:51.613 "hdgst": false, 00:38:51.613 "ddgst": false, 00:38:51.613 "psk": ":spdk-test:key1", 00:38:51.613 "method": "bdev_nvme_attach_controller", 00:38:51.613 "req_id": 1 00:38:51.613 } 00:38:51.613 Got JSON-RPC error response 00:38:51.613 response: 00:38:51.613 { 00:38:51.613 "code": -5, 00:38:51.613 "message": "Input/output error" 00:38:51.613 } 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@33 -- # sn=57972105 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 57972105 00:38:51.613 1 links removed 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@33 -- # sn=533103176 00:38:51.613 01:04:03 keyring_linux -- 
keyring/linux.sh@34 -- # keyctl unlink 533103176 00:38:51.613 1 links removed 00:38:51.613 01:04:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1662183 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1662183 ']' 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1662183 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1662183 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1662183' 00:38:51.613 killing process with pid 1662183 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@967 -- # kill 1662183 00:38:51.613 Received shutdown signal, test time was about 1.000000 seconds 00:38:51.613 00:38:51.613 Latency(us) 00:38:51.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.613 =================================================================================================================== 00:38:51.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:51.613 01:04:03 keyring_linux -- common/autotest_common.sh@972 -- # wait 1662183 00:38:51.872 01:04:03 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1662178 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1662178 ']' 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1662178 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1662178 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1662178' 00:38:51.872 killing process with pid 1662178 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@967 -- # kill 1662178 00:38:51.872 01:04:03 keyring_linux -- common/autotest_common.sh@972 -- # wait 1662178 00:38:52.130 00:38:52.130 real 0m4.085s 00:38:52.130 user 0m7.544s 00:38:52.130 sys 0m1.448s 00:38:52.130 01:04:03 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:52.130 01:04:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:52.130 ************************************ 00:38:52.130 END TEST keyring_linux 00:38:52.130 ************************************ 00:38:52.130 01:04:03 -- common/autotest_common.sh@1142 -- # return 0 00:38:52.130 01:04:03 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:38:52.130 01:04:03 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:38:52.130 01:04:03 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:38:52.130 01:04:03 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 
']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:38:52.131 01:04:03 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:38:52.131 01:04:03 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:38:52.131 01:04:03 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:38:52.131 01:04:03 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:38:52.131 01:04:03 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:38:52.131 01:04:03 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:38:52.131 01:04:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:52.131 01:04:03 -- common/autotest_common.sh@10 -- # set +x 00:38:52.131 01:04:03 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:38:52.131 01:04:03 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:38:52.131 01:04:03 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:38:52.131 01:04:03 -- common/autotest_common.sh@10 -- # set +x 00:38:57.403 INFO: APP EXITING 00:38:57.403 INFO: killing all VMs 00:38:57.403 INFO: killing vhost app 00:38:57.403 INFO: EXIT DONE 00:38:59.936 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:38:59.936 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:38:59.936 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:39:00.194 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:39:02.725 Cleaning 00:39:02.725 Removing: /var/run/dpdk/spdk0/config 00:39:02.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:02.726 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:02.726 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:02.726 Removing: /var/run/dpdk/spdk1/config 00:39:02.726 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:02.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:02.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:02.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:02.984 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:02.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:02.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:02.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:02.984 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:02.984 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:02.984 Removing: /var/run/dpdk/spdk1/mp_socket 00:39:02.984 Removing: /var/run/dpdk/spdk2/config 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:02.984 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:02.984 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:02.984 Removing: /var/run/dpdk/spdk3/config 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:02.984 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:02.984 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:02.984 Removing: /var/run/dpdk/spdk4/config 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:02.984 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:02.984 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:02.984 Removing: /dev/shm/bdev_svc_trace.1 00:39:02.984 Removing: /dev/shm/nvmf_trace.0 00:39:02.984 Removing: /dev/shm/spdk_tgt_trace.pid1195875 00:39:02.984 Removing: /var/run/dpdk/spdk0 00:39:02.984 Removing: /var/run/dpdk/spdk1 00:39:02.984 Removing: /var/run/dpdk/spdk2 00:39:02.984 Removing: /var/run/dpdk/spdk3 00:39:02.984 Removing: /var/run/dpdk/spdk4 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1193734 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1194803 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1195875 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1196510 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1197456 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1197686 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1198657 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1198669 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1199004 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1200514 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1201782 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1202063 
00:39:02.984 Removing: /var/run/dpdk/spdk_pid1202350 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1202656 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1202946 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1203197 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1203443 00:39:02.984 Removing: /var/run/dpdk/spdk_pid1203720 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1204457 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1207224 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1207482 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1207736 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1207743 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1208238 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1208246 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1208734 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1208743 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1209002 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1209183 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1209278 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1209493 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1209830 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1210081 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1210373 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1210637 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1210726 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1210938 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1211189 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1211443 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1211691 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1211945 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1212195 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1212441 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1212694 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1212939 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1213187 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1213440 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1213689 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1213936 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1214191 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1214438 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1214683 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1214936 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1215187 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1215437 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1215688 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1215939 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1216013 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1216313 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1219951 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1300518 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1304769 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1315263 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1320522 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1324443 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1325103 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1331056 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1336840 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1336902 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1337665 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1338512 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1339425 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1339893 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1340061 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1340343 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1340358 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1340363 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1341274 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1342184 
00:39:03.244 Removing: /var/run/dpdk/spdk_pid1343005 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1343569 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1343575 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1343810 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1345031 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1346009 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1354630 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1354878 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1359098 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1364752 00:39:03.244 Removing: /var/run/dpdk/spdk_pid1367338 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1377284 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1386157 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1387762 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1388685 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1405784 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1409554 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1434207 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1438702 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1440425 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1442491 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1442661 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1442675 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1442907 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1443253 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1445019 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1445775 00:39:03.503 Removing: /var/run/dpdk/spdk_pid1446272 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1448362 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1448860 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1449579 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1453632 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1458988 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1463856 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1499919 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1503721 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1509590 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1510802 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1512323 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1516606 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1520919 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1528268 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1528308 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1532983 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1533220 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1533449 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1533716 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1533865 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1535110 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1536843 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1538501 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1540098 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1541695 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1543303 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1549138 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1549709 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1551460 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1552490 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1558204 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1561466 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1566631 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1571910 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1580265 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1587257 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1587259 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1605296 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1605784 
00:39:03.504 Removing: /var/run/dpdk/spdk_pid1606259 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1606876 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1607593 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1608460 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1609120 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1609619 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1613646 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1613873 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1619713 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1619988 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1622205 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1629709 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1629714 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1634910 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1636699 00:39:03.504 Removing: /var/run/dpdk/spdk_pid1638666 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1639779 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1641768 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1642958 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1652190 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1652652 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1653112 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1655376 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1655858 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1656387 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1660206 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1660341 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1661638 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1662178 00:39:03.763 Removing: /var/run/dpdk/spdk_pid1662183 00:39:03.763 Clean 00:39:03.763 01:04:15 -- common/autotest_common.sh@1451 -- # return 0 00:39:03.763 01:04:15 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:39:03.763 01:04:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:03.763 01:04:15 -- common/autotest_common.sh@10 -- # set +x 00:39:03.763 01:04:15 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:39:03.763 01:04:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:03.763 01:04:15 -- common/autotest_common.sh@10 -- # set +x 00:39:03.763 01:04:15 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:03.763 01:04:15 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:03.763 01:04:15 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:03.763 01:04:15 -- spdk/autotest.sh@391 -- # hash lcov 00:39:03.763 01:04:15 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:03.763 01:04:15 -- spdk/autotest.sh@393 -- # hostname 00:39:03.763 01:04:15 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:04.022 geninfo: WARNING: invalid characters removed from testname! 
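The coverage steps that follow are easier to read in condensed form. A sketch under the assumption of a gcov-instrumented build, with $SPDK and $OUT standing in for the long workspace paths and the genhtml-only --rc flags trimmed:

# Post-test capture (-c) over the instrumented tree, tagged with the hostname.
rc=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
    --rc geninfo_all_blocks=1)
lcov "${rc[@]}" --no-external -q -c -d "$SPDK" -t "$(hostname)" \
     -o "$OUT/cov_test.info"

# Merge the pre-test baseline with the capture, then strip third-party and
# system sources from the combined report, rewriting it in place as the
# trace does.
lcov "${rc[@]}" --no-external -q -a "$OUT/cov_base.info" \
     -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
           '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${rc[@]}" --no-external -q -r "$OUT/cov_total.info" "$pat" \
         -o "$OUT/cov_total.info"
done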
00:39:25.968 01:04:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:26.536 01:04:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:28.440 01:04:39 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:30.345 01:04:41 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:32.250 01:04:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:34.153 01:04:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:36.059 01:04:47 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:36.059 01:04:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.059 01:04:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:36.059 01:04:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.059 01:04:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.059 01:04:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.059 01:04:47 -- paths/export.sh@3 -- $ 
00:39:36.059 01:04:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:36.059 01:04:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:39:36.059 01:04:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:36.059 01:04:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:36.059 01:04:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:36.059 01:04:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:36.059 01:04:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:36.059 01:04:47 -- paths/export.sh@5 -- $ export PATH
00:39:36.059 01:04:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:36.059 01:04:47 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:39:36.059 01:04:47 -- common/autobuild_common.sh@444 -- $ date +%s
00:39:36.059 01:04:47 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720825487.XXXXXX
00:39:36.059 01:04:47 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720825487.JsVD0Q
00:39:36.059 01:04:47 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:39:36.059 01:04:47 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']'
00:39:36.059 01:04:47 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:39:36.059 01:04:47 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:39:36.059 01:04:47 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:36.059 01:04:47 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:36.059 01:04:47 -- common/autobuild_common.sh@460 -- $ get_config_params
00:39:36.059 01:04:47 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:39:36.059 01:04:47 -- common/autotest_common.sh@10 -- $ set +x
00:39:36.059 01:04:47 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
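The two @444 expansions above show the scratch-workspace idiom used by this run: the epoch timestamp from date +%s is baked into a mktemp template, so every build gets a unique, time-sortable directory under /tmp, and the same 1720825487 value later names the monitor log prefix. An equivalent one-liner, using the SPDK_WORKSPACE variable name the log itself shows:

  # -d makes a directory; -t resolves the template under $TMPDIR (default /tmp).
  SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")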
00:39:36.059 01:04:47 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:39:36.059 01:04:47 -- pm/common@17 -- $ local monitor
00:39:36.059 01:04:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:36.059 01:04:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:36.059 01:04:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:36.059 01:04:47 -- pm/common@21 -- $ date +%s
00:39:36.059 01:04:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:36.059 01:04:47 -- pm/common@21 -- $ date +%s
00:39:36.059 01:04:47 -- pm/common@25 -- $ sleep 1
00:39:36.059 01:04:47 -- pm/common@21 -- $ date +%s
00:39:36.059 01:04:47 -- pm/common@21 -- $ date +%s
00:39:36.059 01:04:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720825487
00:39:36.059 01:04:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720825487
00:39:36.059 01:04:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720825487
00:39:36.059 01:04:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720825487
00:39:36.059 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720825487_collect-vmstat.pm.log
00:39:36.059 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720825487_collect-cpu-load.pm.log
00:39:36.059 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720825487_collect-cpu-temp.pm.log
00:39:36.059 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720825487_collect-bmc-pm.bmc.pm.log
00:39:36.999 01:04:48 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:39:36.999 01:04:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:39:36.999 01:04:48 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:36.999 01:04:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:39:36.999 01:04:48 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:39:36.999 01:04:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:39:36.999 01:04:48 -- spdk/autopackage.sh@19 -- $ timing_finish
00:39:36.999 01:04:48 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:36.999 01:04:48 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:39:36.999 01:04:48 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:36.999 01:04:48 -- spdk/autopackage.sh@20 -- $ exit 0
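The EXIT trap registered above is what drives the teardown that follows: each power monitor was launched in the background with -l (log to file) and the shared -p prefix, its output redirected under power/, and, as the .pid probes below imply, its PID recorded for later signaling. A minimal sketch of that start/stop bracket, assuming a hypothetical collector script and the pid-file layout suggested by the log rather than the actual pm/common internals:

  POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  # Hypothetical launch: background the collector and remember its PID.
  ./collect-cpu-load -d "$POWER_DIR" -l -p "monitor.$(date +%s)" &
  echo $! > "$POWER_DIR/collect-cpu-load.pid"
  stop_monitors() {
      local pidfile
      for pidfile in "$POWER_DIR"/*.pid; do
          # Signal each recorded monitor; tolerate ones that already exited.
          [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null
      done
      return 0
  }
  trap stop_monitors EXIT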
"${MONITOR_RESOURCES[@]}" 00:39:36.999 01:04:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:39:36.999 01:04:48 -- pm/common@44 -- $ pid=1673062 00:39:36.999 01:04:48 -- pm/common@50 -- $ kill -TERM 1673062 00:39:36.999 01:04:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:36.999 01:04:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:39:36.999 01:04:48 -- pm/common@44 -- $ pid=1673064 00:39:36.999 01:04:48 -- pm/common@50 -- $ kill -TERM 1673064 00:39:36.999 01:04:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:36.999 01:04:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:39:36.999 01:04:48 -- pm/common@44 -- $ pid=1673086 00:39:36.999 01:04:48 -- pm/common@50 -- $ sudo -E kill -TERM 1673086 00:39:36.999 + [[ -n 1074375 ]] 00:39:36.999 + sudo kill 1074375 00:39:37.009 [Pipeline] } 00:39:37.029 [Pipeline] // stage 00:39:37.035 [Pipeline] } 00:39:37.051 [Pipeline] // timeout 00:39:37.056 [Pipeline] } 00:39:37.072 [Pipeline] // catchError 00:39:37.077 [Pipeline] } 00:39:37.092 [Pipeline] // wrap 00:39:37.098 [Pipeline] } 00:39:37.113 [Pipeline] // catchError 00:39:37.122 [Pipeline] stage 00:39:37.124 [Pipeline] { (Epilogue) 00:39:37.139 [Pipeline] catchError 00:39:37.140 [Pipeline] { 00:39:37.154 [Pipeline] echo 00:39:37.156 Cleanup processes 00:39:37.162 [Pipeline] sh 00:39:37.449 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:37.449 1673178 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:39:37.449 1673459 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:37.464 [Pipeline] sh 00:39:37.747 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:37.747 ++ grep -v 'sudo pgrep' 00:39:37.747 ++ awk '{print $1}' 00:39:37.747 + sudo kill -9 1673178 00:39:37.759 [Pipeline] sh 00:39:38.043 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:48.113 [Pipeline] sh 00:39:48.401 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:48.402 Artifacts sizes are good 00:39:48.422 [Pipeline] archiveArtifacts 00:39:48.431 Archiving artifacts 00:39:48.665 [Pipeline] sh 00:39:48.950 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:48.964 [Pipeline] cleanWs 00:39:48.974 [WS-CLEANUP] Deleting project workspace... 00:39:48.974 [WS-CLEANUP] Deferred wipeout is used... 00:39:48.980 [WS-CLEANUP] done 00:39:48.982 [Pipeline] } 00:39:49.000 [Pipeline] // catchError 00:39:49.011 [Pipeline] sh 00:39:49.292 + logger -p user.info -t JENKINS-CI 00:39:49.301 [Pipeline] } 00:39:49.319 [Pipeline] // stage 00:39:49.324 [Pipeline] } 00:39:49.342 [Pipeline] // node 00:39:49.349 [Pipeline] End of Pipeline 00:39:49.383 Finished: SUCCESS